
So what exactly *is* in the cards, then?

Posted Aug 30, 2025 7:57 UTC (Sat) by paravoid (subscriber, #32869)
In reply to: So what exactly *is* in the cards, then? by koverstreet
Parent article: Bcachefs goes to "externally maintained"

> Broken release process is exactly why bcachefs-tools isn't in Debian as well; the package maintainer who took it upon himself to package bcachefs-tools in Debian put project rules ahead of shipping working code, then broke the build and sat on it - and I got stuck with the bug reports.

Debian was not even close to the topic at hand, and yet you felt the need to bring it up, just to attack someone, and with information that misrepresents the truth. This is something you've done before, and you were very recently called out on lkml for it. Stop.

To correct the record: bcachefs-tools is not in Debian because Kent was impossible to work with, and personally attacked, smeared and/or alienated multiple sets of distinct contributors who attempted to work with him in good faith, one after another. It was ultimately removed from unstable because no one was able to get through. Source: I am one of them.



So what exactly *is* in the cards, then?

Posted Aug 30, 2025 11:44 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (44 responses)

The specific, technical issue was the package maintainer switching out the Rust dependencies for the packaged versions from Debian. I explained that this was a bad idea at the outset, because it invalidated all the testing we do, and the Debian package wasn't replicating that testing; it was also wholly unnecessary because Rust dependencies are statically linked.

He did so anyway, and then swapped out bindgen for an old version that was explicitly unsupported according to the Cargo.toml. That broke the build, he sat on it, and Debian users stopped getting updates (I didn't even see a report until months later).
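
As an illustration of what "explicitly unsupported according to the Cargo.toml" means (a hypothetical sketch - the actual bcachefs-tools manifest may differ; the version numbers are the ones cited later in this thread): Cargo treats a bare version string as a caret requirement, so a dependency line like the one below rules out the 0.66.1 that Debian was shipping.

    [dependencies]
    # Caret requirement: means ">=0.69.4, <0.70.0". Debian's packaged
    # bindgen 0.66.1 falls outside this range, so that combination was
    # never built or tested upstream.
    bindgen = "0.69.4"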

This resulted in users being unable to access their filesystems.

There was briefly a buggy version of bcachefs-tools that couldn't pass mount options correctly; users in every other distro got a fix quickly, but Debian users did not - and we found out about this when a lot of users weren't able to mount in degraded mode after having a drive die.

What you're doing is conflating technical criticism with personal criticism, and then using that as an excuse to ramp up the drama. Technical criticism, including pointing out failures of process, has to be OK for engineering to function; otherwise we don't learn from our mistakes. That can make for a harsh learning environment, but when you're shipping critical system components that have to work, that's what you signed up for; we have responsibilities.

The person in question was warned explicitly that what he was doing was a bad idea; he could have at any point said "this is too complicated an issue for me to handle; I'll let someone else take this one" (and there are mechanisms in Debian process for obtaining exceptions to process rules that could have avoided this, by simply skipping the Rust dependency unbundling with a clear explanation of why); he ignored advice and plowed ahead, and a lot of people were affected by those actions.

When we work on this kind of code, we have to be responsible for the work we do, including our mistakes.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 14:25 UTC (Sat) by ma4ris8 (subscriber, #170509) [Link] (43 responses)

My goal is to show the power of listening to the other person. I'll try to listen first, and then answer.
I hope that you get my point about listening well in order to carefully heal relationships.

Listen part: I'm trying to repeat roughly what you wrote above, to show that I listened to you:

First, you state that the maintainer switched the Rust dependencies to the packaged versions from Debian.
You explained that it was a bad idea, for multiple reasons: the dependencies are statically
linked, and the switch invalidated all your active testing.

He changed the Rust dependencies anyway, and then swapped out bindgen for an older version,
which broke the build for Debian, and filesystem users stopped getting updates.

Important closing question: did I repeat (re-phrase in text) precisely what you wrote?

Answer part:
You wrote many items in one message. I answered only the first one,
to keep the answer small enough. Some progress, but further messages
could increase coverage.

To me it sounds like some mistakes were made by both you and others.
The unfortunate end result was that Debian users were hit by the bug.
I couldn't tell from your message what the outcome was for the relationships:
whether each personal relationship worsened, stayed the same, or healed
in the end.

How to communicate (listen) effectively, to heal relationships?

This way of listening is mentioned in
https://www.verywellmind.com/what-is-active-listening-302...
"Paraphrasing and reflecting back what has been said"
(Those who know psychology know these things.)

What I showed is one way to restore human relationships, with Linus and others:
you could try to restore relationships just by listening to others. Choose your
messaging cases carefully, picking ones where you think you won't cause much backlash,
but could make progress in healing the relationship by listening to the other person.

If you get backlash, you were just given an opportunity to listen to the complaint.
Repeat the whole complaint in nearly the same words,
so that the other person feels fully heard.
Try to at least make progress, so please listen carefully to the
complaint by repeating it. You can take pauses, like answering another day, to reduce the burden.
Please don't open up any new problems. If you do (I make mistakes sometimes),
and get backlash as a heated answer, please listen and repeat it carefully,
to reduce the impact.

By doing this only sparingly, so as not to burden others,
you could both improve your communication skills
and perhaps let others learn from it too,
and perhaps then relations with other stakeholders, like maintainers
and Linus, could be restored to a level where you can co-operate efficiently together again.

I've seen that this listening technique sometimes helps on-line, in addition to meeting face to face.
I'm trying to improve my communication skills in the contexts of
change management for the "OWASP Top 10" and AI adoption.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 18:21 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (42 responses)

> For me it sounds like there were some mistakes done by both you and others.

Why are you trying to bothsides this?

You seem to have the facts straight, but I'm not at all clear on what you think I did wrong.

All this was explained clearly, calmly and patiently to the Debian package maintainer when he started; he decided to do it his way, and when the breakage became apparent I asked if he was going to fix it and he just said "nope, too complicated" and walked off. So I got stuck with warning bcachefs users away from Debian, and he wrote a screed of a blog post about how impossible I am to work with.

Sorry, but from where I sit that just looks crazy.

I'm all about focusing on the human aspect, sitting down with people and having open and honest conversations. I do that regularly, and believe me I and others have tried ratcheting down the tensions, bringing the focus back to the technical and looking for ways to make this easier and take things in little steps.

The whole rest of the 6.16 merge cycle after the journal_rewind fiasco was just that, from myself and others; we've tried to bridge the gap, bring the focus back to the technical, look for ways to make things work - it doesn't seem to be getting us anywhere.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 21:00 UTC (Sat) by josh (subscriber, #17465) [Link] (40 responses)

> You seem to have the facts straight, but I'm not at all clear on what you think I did wrong.

You don't demonstrate any degree of understanding of why requirements other than your own matter. You talk about what the Debian maintainer did, and how you told them not to. You don't talk about why those requirements exist and what you did to help them meet those requirements. You act like the story begins and ends with "I told them no and they didn't obey".

This is on par with what happens with the Linux kernel. You don't demonstrate and communicate that you understand requirements other than your own and place weight on them. You just act like they're obstacles to getting *your* requirements met, and try to work around them.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 21:48 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (39 responses)

No, I told them it was a bad idea, and I explained clearly and simply why that was the case; they just said they weren't going to do that, because they didn't feel like breaking with Debian process. There were no technical counterarguments.

It's not "he didn't obey", it's "he did something stupid that I warned him was a bad idea and then he didn't stick around to resolve the situation and a has to deal with the fallout".

It's not an authority thing, it's just about making good decisions and being responsible for your decisions.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:09 UTC (Sat) by josh (subscriber, #17465) [Link] (33 responses)

> No, I told them it was a bad idea, and I explained clearly and simply why that was the case; they just said they weren't going to do that because they didn't feel like breaking with Debian process, there were no technical counterarguments.

Package upstreams vs Debian process typically ends with "your package is not more important than our consistency"; that is a reliably predictable outcome. If you want to *change* Debian process or policy, that's a conversation that requires a detailed case for doing so, which requires understanding of why the requirements are what they are, not just why you want them to be different.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:43 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (32 responses)

Debian has processes for obtaining carveouts/exceptions for critical system packages, naturally with more review. E2fsprogs used them, and that's what should have been done here; there was no need to rush packaging bcachefs-tools for Debian.

Debian

Posted Aug 31, 2025 1:29 UTC (Sun) by comex (subscriber, #71521) [Link] (31 responses)

I'm so confused about the Debian situation. If I unpack the original blog post [1] where the Debian maintainer of bcachefs-tools was complaining about it, there seem to be three separate issues at play:

(1) In April 2024, Debian unstable was shipping too-old versions of some packages. In particular, bcachefs-tools wanted bindgen 0.69.4 (released upstream 2 months prior), while Debian unstable was shipping 0.66.1 (released upstream 8 months prior).

(2) In April 2024, Debian unstable was shipping too-*new* versions of some packages. In particular, bcachefs-tools wanted rust-errno 0.2.x, while rust-errno 0.3.0 had released upstream 14 months prior, and Debian unstable was shipping 0.3.8.

(3) Despite these conflicts happening in Debian unstable, the Debian maintainer seemed more concerned about how bcachefs-tools would be maintained in the future in Debian stable.

To me these seem like three different problems with three different solutions.

(1) If Debian unstable was shipping old versions of some dependencies, then Debian should have updated those packages. Perhaps other dependents would have broken with newer versions of the dependencies, but AFAICT there was no specific evidence of this. 2 months (the age of bindgen 0.69.4 at the time) sounds to me like a reasonable lead time for a dependency. If Debian’s processes make it too hard to update Rust packages at a reasonable pace *in unstable*, then maybe they need to be changed, but I don’t know whether that’s true or whether the issue was something else; perhaps the maintainer's stated lack of experience with Rust packaging.

(2) If bcachefs-tools was depending on old versions of some packages, then bcachefs-tools should have been updated. The maintainer could have submitted a PR upstream. That would be easier said than done if this were something like Kubernetes [2], but in this case the blog post only cited 2 packages that needed to be updated.

As for (3), I don’t fully understand the problem. Debian stable freezes the entire set of packages. That includes the Rust packages, but also bcachefs-tools and the kernel. Some Linux distros have “hardware enablement” branches where they upgrade the kernel separately from the rest of the system, but AFAIK Debian does not. So why would someone maintaining bcachefs-tools on stable care what is happening upstream?

Overall - I'm sure there are some factors I'm missing. But every time I've seen this come up, even the knowledgeable commenters seem to smoosh the issue into "bcachefs-tools is not stable enough for Debian", and to me that really seems like an oversimplification and misunderstanding. Does anyone have additional light to shed?

Sources:
[1]: https://jonathancarter.org/2024/08/29/orphaning-bcachefs-...
[2]: https://lwn.net/Articles/835599/
For version history:
https://crates.io/crates/bindgen/versions?sort=semver
https://tracker.debian.org/pkg/rust-bindgen-cl

Debian

Posted Aug 31, 2025 1:49 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (4 responses)

He's missing the bigger picture, and he's also getting way ahead of himself.

bcachefs-tools updates probably can't follow the Debian "hard freeze for two years" model, and this comes up in other critical system packages, too. _Maybe_ they can, but it's too early to be making those kinds of assumptions and locking us into any particular path.

The big concern is that just because a user is running Debian stable they may be running a newer kernel (for drivers, generally), and we want bcachefs-tools to be in sync with the kernel. It's not strictly necessary, we have more compat options than other filesystems (due to in-kernel repair being first class), but it puts us in an uncomfortable situation.

Debian may not have official "hardware enablement", but it's still commonplace to pull in a newer kernel from a different channel, and that's expected to work. The kernel has hard requirements about not breaking userspace for exactly the same reason; bcachefs takes the same approach. Upgrades and downgrades should always work; that's a huge part of what we've been working through in the experimental phase.

If we have to ship/backport a new bcachefs-tools for Debian stable users, unbundling Rust dependencies at all completely breaks that.

But the bigger point is that it's too early to even know what backports are going to look like for bcachefs, and we don't want to be in Debian stable at all yet.

_But_, for the people who are on Debian and running bcachefs now: they still need a supported and working filesystem and a process for shipping bugfixes. That's the issue that needs to be solved today for any Debian users to be running bcachefs, not "how do we support Debian stable users for the non-experimental version of bcachefs that will be getting backports and doesn't even exist yet".

The other big thing to note that makes debundling really problematic is that Debian is not the only distro. If other distros were unbundling (thank god we got Fedora to agree not to), and their Rust library versions were not in sync - see where that puts us? The last thing I want to get sucked into is dealing with different distros with different, conflicting library requirements.

It's not the end of the world for things like rust-errno; I would have groaned at that one, but swapping that one out for a different version is unlikely to cause real breakage.

Bindgen, OTOH - FFI stuff has the very real potential to introduce the nastiest sort of heisenbugs which won't be caught by the compiler (they have happened and they are _not_ fun), and even I wouldn't trust my test coverage to catch all of those - and Debian does not replicate that testing. Swapping out bindgen was actively dangerous, and never should have even been attempted.
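
To make the failure mode concrete, here is a sketch (with made-up field names, not the actual bcachefs structures) of how a lost packed attribute shifts field offsets without any compiler diagnostic - reads through the stale binding silently return padding bytes:

    use std::mem::offset_of;

    // What the C header actually lays out (packed: no padding).
    #[repr(C, packed)]
    struct PackedAsInC {
        a: u8,
        b: u32,
    }

    // What bindings generated by a different bindgen, minus the packed
    // attribute, would assume.
    #[repr(C)]
    struct MissingPacked {
        a: u8,
        b: u32,
    }

    fn main() {
        assert_eq!(offset_of!(PackedAsInC, b), 1); // matches the C layout
        assert_eq!(offset_of!(MissingPacked, b), 4); // silently wrong
    }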

I specifically told the Debian package maintainer that that one was dangerous to change, and he did it anyways...

Debian

Posted Aug 31, 2025 2:11 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (2 responses)

As an additional note, any time distros make changes without contributing those back there's a real risk, because they don't do the same testing and QA that we do.

We had another example of that from just yesterday: Arch flipped on LTO, and it turns out that produces a miscompilation, because the final link is now done by rustc which has different rules than C code about eliding bounds checks.

This one was minor, it just caused the progress indicators on data jobs to display incorrectly, but it's quite the scary bug.

If distros want to make these changes (and LTO is a perfectly fine thing in principle), we really want them contributed upstream so they can get proper testing and QA.

Debian

Posted Aug 31, 2025 3:40 UTC (Sun) by jmalcolm (subscriber, #8876) [Link]

> If distros want to make these changes (and LTO is a perfectly fine thing in principle), we really want them contributed upstream so they can get proper testing and QA.

Seems very reasonable

Debian

Posted Sep 1, 2025 21:32 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link]

Correction: it seems this was a locally built package, not Arch :)

(And we're not sure if LTO was responsible, we haven't had the time to root cause it. But we did see a disassembly of a definite miscompilation, and those are mildly terrifying).

Debian

Posted Aug 31, 2025 3:47 UTC (Sun) by jmalcolm (subscriber, #8876) [Link]

> Upgrades and downgrades should always work; that's a huge part of what we've been working through in the experimental phase.

Thank you for this. In my experience you have succeeded.

> bcachefs-tools updates probably can't follow the Debian "hard freeze for two years" model

Agreed. In a distro like Debian, I do not see how you adopt something like bcachefs until bcachefs itself has stabilized enough to flow into Debian Stable. If you are going to try, you have to be getting the kernel and userland from outside of Debian.

> it's still commonplace to pull in a newer kernel from a different channel, and that's expected to work

Sure. But when there is a userspace component, a "working" kernel is not enough.

Debian

Posted Aug 31, 2025 9:14 UTC (Sun) by paravoid (subscriber, #32869) [Link] (25 responses)

That's not exactly right - perhaps I can help fill in the blanks.

Originally bcachefs-tools was a C program. Jonathan Carter maintained it in Debian. At some point it gained optional Rust dependencies, some of which were not in Debian, and it looked difficult, so Jonathan elected to opt out of the Rust parts. The package was also poorly maintained, which I reported in all transparency to the BTS: https://bugs.debian.org/1066929. I also talked a few times in private to Jonathan, and I understand and respect his circumstances.

I stepped up to help out, in the Debian BTS and in private, to bring the package back to Debian standards and enable the Rust parts - this was https://bugs.debian.org/1060256. I also got in touch in private with the kernel maintainers (Salvatore Bonaccorso) to enable bcachefs in the kernel package: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1054620#15

In terms of Rust dependencies: for some, Debian was behind and/or missing them; for several of them, Debian was ahead; most of them were spurious cruft in the bcachefs tree and not actual dependencies. I engaged with upstream through PRs #203, #204, and #205, and worked with folks from the Debian Rust team to bring the rest of them to Debian. All were merged (except one of the three, which Kent reauthored as-is, dropped any credit for, and silently closed the MR. Whatever.). You can find a summary at https://bugs.debian.org/1060256; I also gave a similar summary to upstream in https://github.com/koverstreet/bcachefs-tools/issues/202

Steinar H. Gunderson did a ton of work adjusting the package to enable the Rust dependencies, communicated promptly with the maintainer, and also built out-of-Debian packages for others to test. He and I talked quite a bit with each other, and talked about comaintaining the package. I know he had talked to upstream as well through IRC etc.

In terms of bindgen: at the time, bcachefs-tools was relying on bch_bindgen, a custom fork of an older bindgen that Kent made, which we could potentially vendor, but it didn't seem right. Our work in Debian is to look at the whole ecosystem, and to avoid carrying multiple forked versions that every upstream vendors, if possible. According to Kent, IIRC, it was also kind of a hack. (Note that the package worked without this fork plus a revert of the commit before the fork. Just not on i386, plus it was a custom patch that we shouldn't carry.) So I brought this up on the bcachefs bug tracker, and another bcachefs contributor, Thomas Bertschinger, found a way to fix this struct packing/alignment issue in a Rust-upstream-acceptable way with https://github.com/rust-lang/rust/issues/59154. He was awesome, and he and I had very collegial private exchanges about this.
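
For readers unfamiliar with the underlying limitation, here is a sketch of the constraint itself (not of the actual fix, which went through the bindgen/Rust upstreams): Rust rejects combining packed and align hints on a single type with error E0587, which is why C structs declared with both attributes are awkward to bind. A common workaround is a wrapper type:

    // This does not compile (error E0587: type has conflicting packed
    // and align representation hints):
    //
    //     #[repr(C, packed, align(8))]
    //     struct Conflicting { a: u8, b: u64 }
    //
    // The usual workaround: pack the fields, then wrap them in an
    // aligned newtype.
    #[repr(C, packed)]
    struct Fields {
        a: u8,
        b: u64,
    }

    #[repr(C, align(8))]
    struct Aligned(Fields);

    fn main() {
        assert_eq!(std::mem::size_of::<Fields>(), 9);
        assert_eq!(std::mem::align_of::<Aligned>(), 8);
        assert_eq!(std::mem::size_of::<Aligned>(), 16); // padded up to alignment
    }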

I'm not a Rust person, so I talked to a few folks in the Debian Rust team about updating bindgen in Debian, from 0.66.1 to 0.69.4, but it's a complicated transition and it would take more time. Instead, as a stopgap, I filed https://bugs.debian.org/1078698 with a backport of the patch to 0.66.1 (it applied cleanly). The Rust folks uploaded 0.66.1-9 with this patch a few months later. This was the last blocker. Newer versions of bindgen found their way to Debian after that (trixie was released with 0.71.1).

The package was going to be there in ~Oct 2024: latest version, all features enabled, no custom patches, no lagging dependencies. I'm 100% confident it would have made it into the trixie release.

All in all, several people collaborated across the ecosystem to make this happen. The whole ecosystem benefited: bcachefs upstream became better as a result of this work, Debian shipped more and updated Rust software, Rust/bindgen upstream gained code to address a real user need. Several of us were in touch with each other in public and in private, collegial to each other, respecting each other's work, dragging each other forward when one of us was unable to make progress, either because of personal circumstances, lack of time, or lack of knowledge in a particular ecosystem. All in the collaborative spirit of large scale open source software development.

Then Kent started throwing profanities and "PSAs" all around and started treating people like shit in public across multiple mediums, often talking them down in their absence, including unsubstantiated attacks in /r/bcachefs/ that he subsequently locked, giving no opportunity for a rebuttal. I quoted one of these emails in another comment here. He also started, late in the game, bringing up Debian as a whole as the problem - its "lack of flexibility" and its policies around vendoring - and... expecting Debian to change its Rust policy (right or wrong) across the board... for a leaf package for an experimental filesystem that ~no one is using. (Again, we had a perfectly working package at that time.)

Kent around that time showed the same aggression toward his peers and the same sense of entitlement (IMHO) in Linux, ultimately resulting in a CoC violation and in public spats with Linus about the -rc merge policy, so it became clear, at least to me, that this wasn't an isolated incident, and that working with this individual is not how I want to be spending my limited, volunteer time.

And that was that. All of us who had been involved in the Debian package halted our involvement; the package was removed from unstable and orphaned, and no one has picked it up yet.

I don't know why Debian was brought up on LWN and in this thread specifically. This is another case of completely unrelated revisionist history-making in my book (same as has been happening, e.g., upstream w/ btrfs), which is why I felt the need to intervene.

HTH!

Debian

Posted Aug 31, 2025 12:59 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (23 responses)

You do cover a lot of good work (Thomas Bertschinger did the really critical thing of figuring out a workable solution to packed-and-aligned, which Rust still does not handle - that one was painful).

But you're leaving out the critical part, which is where bcachefs-tools updates stopped and a bugfix for option passing (an FFI bug, naturally) didn't go out for months. I was not kept in the loop, and I found out when I started getting a bunch of bug reports from Debian users who couldn't mount after they'd lost a drive.

All the work you're talking about regarding the Debian ecosystem: that's great, but if you're putting that ahead of the most basic "make sure things work for users", you got your priorities wrong.

That's the screwup.

And I didn't see anyone you mentioned showing any concern for the testing and QA aspects, the (very real) concerns with using a different version of bindgen introducing FFI bugs, and I didn't see any of you responding to the user bug reports that I got from people who couldn't mount their filesystems.

This is the thing that you need to understand: if you cannot get these priorities straight, I do not want you anywhere near bcachefs or bcachefs-tools.

Making sure things work for the end user comes first.

All that work that you're talking about re: the custom bindgen fork, there was _no reason_ for that to take precedence over making sure package updates with bugfixes could go out. Package updates were working previously, that broke, and there were consequences for users.

That PSA came out after I spoke with the package maintainer, asked him if he was going to resolve this, and got a flat "no". Debian was shipping a broken package, and with an unresponsive maintainer I had no other option but to tell bcachefs users to steer clear of Debian because things had gone so off the rails; my job is to make sure bcachefs users have working filesystems and this was a very clear breach of trust that I had no other way to resolve.

Debian

Posted Aug 31, 2025 17:09 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

> the (very real) concerns with using a different version of bindgen introducing FFI bugs, and I didn't see any of you responding to the user bug reports that I got from people who couldn't mount their filesystems.

As you should know, bindgen can generate tests that check that the offsets and sizes of the Rust fields match the ones it computes for the C fields. So it should not be hard to compare those generated tests across the two releases.
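
For reference, the generated layout tests look roughly like this (the struct and field names here are invented, and real bindgen output differs in how it computes offsets). Each test pins the size, alignment, and field offsets that the generator derived from the C headers, so two bindgen versions producing different layouts would fail each other's tests:

    #[repr(C)]
    pub struct example_opts {
        pub flags: u64,
        pub mode: u32,
    }

    #[test]
    fn bindgen_test_layout_example_opts() {
        assert_eq!(::std::mem::size_of::<example_opts>(), 16usize);
        assert_eq!(::std::mem::align_of::<example_opts>(), 8usize);
        assert_eq!(::std::mem::offset_of!(example_opts, mode), 8usize);
    }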

You need to stop operating on the principles that a) you can command other people and b) everybody who disagrees with you is an idiot. Good luck.

Debian

Posted Aug 31, 2025 17:59 UTC (Sun) by jdelkins (subscriber, #166490) [Link]

I am "rubbernecking" this matter, in the voyeurism sense, as a possible future bcachefs user. My perspective is based only on reading the public discourse. I'm neither a Debian nor bcachefs user.

The Debian social contract (https://www.debian.org/social_contract) says "... 4. Our priorities are our users and free software. We will be guided by the needs of our users and the free software community. We will place their interests first in our priorities. ..." The question is, was this followed or not?

I think Kent is arguing that bcachefs users who happened to be running Debian were meaningfully harmed by critical bugs, through a lapse in applying this principle for their benefit.

Some Debian project folks are/were thinking, *perhaps* (it's hard to gather who is who, as I don't know anyone), that bcachefs users are an insignificant subset of Debian users, and that the greater good of prioritizing the packaging/unbundling policy yields long-term benefits to so many more users that there's no overarching reason to make exceptions to this policy for every experimental/fringe feature of the OS, at least not this one.

Matter of perspective. I don't get a vote. I do tend to sympathize more easily with Kent's side, because being an fs in the mainline kernel, regardless of its status or the size of its user base, legitimizes bcachefs as a critical system. OTOH, I have a hard time imagining what it's like to support an install base the size of Debian's. It may be that tradeoffs that severely hurt 10^2 users in exchange for insignificant or presumed long-term benefits to 10^8 users are necessary and commonplace - so commonplace that the tradeoff isn't, or can't be, explicitly weighed case by case.

Unfortunate if that's true, but it does suggest that all parties would benefit from keeping experimental features and their tooling out of Debian as much as possible.

Debian

Posted Aug 31, 2025 20:22 UTC (Sun) by Lionel_Debroux (subscriber, #30014) [Link] (4 responses)

When the then-current maintainers of bcachefs-tools got (rightfully, IMO) disgusted by your attitude, and proceeded to orphan the package and remove it from Debian unstable, as paravoid described... well, of course the updates of the package stopped; how wouldn't they have?
Since Linux distros can't remotely force-remove packages from user computers, the package maintainers did what they could to signal "you shouldn't use that filesystem" to end users. At that stage, it was clear that bcachefs ought to be avoided due to risks to its long-term maintenance, despite your dedication to your filesystem.
The users who are brave enough to use an experimental filesystem always get to keep the pieces if things go sideways, for whatever reason...

On Debian's side, the initial mistake might have been to ever publish a package for bcachefs-tools before your experimental filesystem became 1) more stable and 2) properly packageable. At least, the technical solution mentioned by paravoid may benefit the general ecosystem.

Debian

Posted Aug 31, 2025 20:31 UTC (Sun) by pizza (subscriber, #46) [Link] (1 responses)

> Since Linux distros can't remotely forcibly remove packages from user computers, the package maintainers did what they could to signal "you shouldn't use that filesystem" to end users.

"no longer maintained, don't use this any longer" and "maintained but there's no need for a new release" should not share the same "signal"

Debian

Posted Sep 3, 2025 8:00 UTC (Wed) by dsfch (subscriber, #176007) [Link]

If any message, however worded, comes across as "maintained but there's no need for a new release [EVER!]", then there's a miscommunication on first principles. Software - and even more so its uses/use cases - evolves. Even mathematically-proven-"perfect" software will find itself in the situation that it lacks features or is found to have unwanted - even if potentially not "undescribed" - side effects.

If it's not clear to users from the very beginning what "no more releases" means, and that it makes the difference between _retained_ and _maintained_, then someone has been communicating in a way-too-people-pleasing style.

Debian

Posted Sep 1, 2025 22:02 UTC (Mon) by comex (subscriber, #71521) [Link] (1 responses)

I don't think that's quite what happened.

In the comment you replied to, Kent cited a lack of updates (and lack of communication about this) as motivation for posting his "PSA" [1], which in turn motivated Jonathan Carter's decision to orphan bcachefs-tools a few weeks later [2]. So the lack of updates was already an issue before the orphaning.

However, according to paravoid's comment, this was expected to be resolved by around October 2024. In other words, the effort was well underway and only two months from completion when Kent posted the PSA in August 2024 and caused the effort to be abandoned. If Kent had just not posted the PSA, he would have gotten most of what he wanted.

But not all of what he wanted. If I'm understanding paravoid correctly (I might not be), there are two things that would be fixed: (1) bcachefs-tools would start being updated regularly in Debian, and (2) bcachefs-tools' dependency versions would now be compliant with the upstream dependency version requirements specified in Cargo.toml. However, the dependencies would still come from system packages, and would not be the exact versions bcachefs-tools is testing against (as recorded in Cargo.lock). Kent indicated in the PSA, and continues to insist in this comment thread, that he wants unbundling and an exact version match, so that Debian users could benefit more from upstream testing.
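
For those less familiar with Cargo, the distinction being drawn is between the version *requirement* in Cargo.toml and the exact version *pin* in Cargo.lock (a sketch with illustrative numbers, not bcachefs-tools' actual files):

    # Cargo.toml - the range upstream declares as supported:
    #   bindgen = "0.69"          # any 0.69.x satisfies this
    #
    # Cargo.lock - the exact version upstream builds and tests against:
    #   [[package]]
    #   name = "bindgen"
    #   version = "0.69.4"

A package tracking only Cargo.toml can be compliant with the declared requirements while still shipping a dependency version that upstream never tested.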

Regardless, the PSA itself was rude and vague, so I can understand why this would drive people to orphan the package. Though I think Jonathan Carter could have done a better job explaining the decision.

[1] https://lore.kernel.org/linux-bcachefs/36xhap5tafvm4boiy3...
[2] https://jonathancarter.org/2024/08/29/orphaning-bcachefs-...

Debian

Posted Sep 1, 2025 22:20 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link]

I posted the PSA because I was getting bug reports from Debian users that were unable to access their filesystems and the package maintainer was unresponsive.

You don't just stop doing updates because a change you made broke the build: at the very least do that in a topic branch, don't develop on HEAD.

Debian

Posted Sep 1, 2025 7:28 UTC (Mon) by epa (subscriber, #39769) [Link] (15 responses)

> Making sure things work for the end user comes first.

That’s true for a stable distribution, which nobody would dispute, but aren’t we talking about Debian unstable here? Surely anyone running unstable has made a choice to accept somewhat less tested software to help develop and test a future stable release.

As a non-Debian-user I know that unstable is nonetheless expected to produce a working system, and Debian testing is where the real breakage happens. Still it might not be a good fit for the model Kent Overstreet is expecting, where the stability of the filesystem takes priority over everything else.

Would it not be better to say that bcachefs in Debian unstable is explicitly experimental, not to be used in production systems, and bug reports should be fielded by Debian rather than by upstream? And those wanting a closer link to upstream and a higher level of “support” should use a different distribution, or install the kernel and support program from outside the Debian package repositories.

Debian

Posted Sep 1, 2025 8:07 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (5 responses)

> Would it not be better to say that bcachefs in Debian unstable is explicitly experimental, not to be used in production systems, and bug reports should be fielded by Debian rather than by upstream?

Not really - the role of unstable is, for the most part, to exist as a place for packages to migrate into testing and, in the end, become part of a stable release. There are various cases where packages can be explicitly excluded from testing migration (the easiest way to do so is to file an RC bug against them), but that's typically because there's a known issue during a transition that's currently being worked through. There's a separate experimental repository that's an extension to unstable rather than a complete distribution in itself, and which doesn't trigger any sort of migration. It's also weird in that even explicitly enabling it by adding a source won't allow you to install packages from it - you need an explicit "apt -t experimental" statement to pull from there.

So, if the goal was to have an explicitly unsupported package, there's a way to do that in Debian, but putting a package in unstable and preventing migration to testing wouldn't be the right way to do it.

Debian

Posted Sep 1, 2025 20:05 UTC (Mon) by jrtc27 (subscriber, #107748) [Link]

src:firefox has done precisely this for over 9 years; see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817954.

Debian

Posted Sep 25, 2025 7:33 UTC (Thu) by daenzer (subscriber, #7050) [Link] (3 responses)

> There's a separate experimental repository that's an extension to unstable rather than a complete distribution in itself, and which doesn't trigger any sort of migration. It's also weird in that even explicitly enabling it by adding a source won't allow you to install packages from it - you need an explicit "apt -t experimental" statement to pull from there.

To be pedantic, if a package is available only in experimental, apt selects it for installation even without -t experimental (or the /experimental suffix). However, -t experimental is still needed if the package depends on a version, available only in experimental, of a package that is also available in another suite.

Debian

Posted Sep 25, 2025 8:56 UTC (Thu) by taladar (subscriber, #68407) [Link] (2 responses)

Can't you just change the priority with apt pins? Or is experimental treated differently from any other repository?

Debian

Posted Sep 25, 2025 9:12 UTC (Thu) by daenzer (subscriber, #7050) [Link] (1 responses)

> Can't you just change the priority with apt pins?

You can; offhand, I'm not sure why that would be preferable, though.

Debian

Posted Sep 26, 2025 7:52 UTC (Fri) by taladar (subscriber, #68407) [Link]

Because that way, future versions of the same package will also be installed from the repository you originally installed it from, while the -t option is a one-time thing.

Debian

Posted Sep 1, 2025 8:18 UTC (Mon) by tux3 (subscriber, #101245) [Link]

A package could stay in Debian experimental, and it would not move down to Debian sid or testing without manual action. Regular Debian users won't see it without specifically opting in.
That's the current state of bcachefs-tools in Debian. In _principle_ I think it could receive updates while staying in experimental, until the pace of urgent fixes starts to slow down a bit and it seems ready to move down.

Now I don't know whether there's any appetite to do that on the Debian side (or on either side, really).
It's also a little less interesting if only the userland tools are packaged, but users must still switch out Debian's kernel for a version that may or may not be in sync with the -tools.

Debian

Posted Sep 1, 2025 12:47 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (5 responses)

If you're trying to build reliable software, the only purpose of an "unstable/experimental" label should be as a warning to the end user that things are not in the ideal state yet.

It's _not_ an excuse for the implementer to not follow all our normal best practices.

Shipping rock solid dependable code is less an end state than a process. If you ever want to get there and lift that experimental label, you have to stay on top of bug reports and keep the focus on making things as usable, reliable and solid as possible, every step of the way.

It's not just the code that we're developing and improving, it's our processes, too: how do we make changes without being disruptive, and when we screw up (bugs are a fact of life), how do we get the fixes out quickly to minimize damage?

You want all of that to be well ironed and smooth before taking the experimental label off, not after.

Debian

Posted Sep 1, 2025 13:00 UTC (Mon) by epa (subscriber, #39769) [Link] (4 responses)

I completely agree but my point is that perhaps Debian unstable is not the right place for this ironing-out to happen.

It would however be a good place for the Debian-specific packaging issues to get sorted out (stabilizing the version of dependencies and so on) once the upstream work is complete.

Debian

Posted Sep 1, 2025 13:42 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (3 responses)

No, that would make Debian as a whole look pretty broken.

Debian may have some issues (a bit rigid and inflexible, with a habit of applying more specific rules inappropriately when they shouldn't override the more basic "make sure things work"), but that's endemic to any big project, and the public conversations help with that.

This totally could have worked if we had a package maintainer who was a bit more on the ball and experienced in filesystem matters, as well as Debian process. As mentioned elsewhere in the thread, there are specific processes for obtaining carveouts for critical system packages; the Debian maintainer explicitly said he didn't want to do that and that's where things started to really go off the rails.

The kernel has been telling companies everywhere for a long time (when I was at Google this came up frequently) to "work with upstream". That's a great philosophy, and it applies to distributions and the kernel as well: work with the people you're pulling from, instead of just trying to dictate or make your own private changes, and make sure it's a two way conversation.

Two way conversations always get better results :)

Debian

Posted Sep 1, 2025 20:25 UTC (Mon) by GNUtoo (guest, #61279) [Link]

Personally I use neither Debian nor bcachefs, but from what I've read here so far, it looks more like the different projects (Debian and bcachefs) simply have different priorities.

As I understand it, we have the following at stake here:
- Testing for software that is critical for some situation or use case
- The ability to mix and match dependencies
- Static vs dynamic linking

Part of the issue is who is willing to pay the cost of certain choices. For instance, extensive testing can require significant resources, and dealing with very strict dependencies or static linking in userspace also brings costs. If these things didn't have costs, it would be trivial to satisfy both Debian and bcachefs, because bcachefs would be extensively tested in all situations.

I think that everybody does understand how critical filesystems can be for the people who use them, especially when there is only one main implementation of the filesystem and the filesystem is somewhat famous.

So I'd like to bring arguments for the other side of things as well. Here's an example about security[1]. The important takeaway here is not the technical part, but rather the feeling of the presenter toward static linking, and maybe also how this affects a distribution as big as Debian, which can lead to distro-wide policies about such practices. And making an exception each time an issue comes up is probably not an ideal situation either, as in the end it would mean that the distribution's priorities are not taken into account at all.

Another argument that is made less often, but that I find even more important, is RAM consumption: dynamic linking enables running more programs in parallel on older hardware, but this necessarily brings up the question of who distributions like Debian are for. For instance, should people with less RAM be welcome in FLOSS communities or not? Or are the benchmarks and/or real-world speed of certain applications more important (like browsers, if they are compiled with LTO, which means they use static linking)?

The takeaway is probably that there is no one-size-fits-all here. Personally, I've been on both sides of this issue, and the choices I made in that regard really depend on the situation and/or on what resources are available to tackle the problem.

For instance, I currently work on GNU Boot, a distribution to replace the BIOS on some old computers, and here testing is critical because we want to allow users to easily upgrade without breaking their computers. So we basically work on our own distribution, and for now we even advise people to use a specific GNU/Linux distribution to run our build scripts if they want to build images themselves. We also rely on users to test the images we release and to report such tests publicly, and we teach users to look at the tests before installing images. We are also gradually moving to Guix to get tradeoffs that better suit what we want to do (same images, controlled environment, available in most distros without depending on compiler binaries and without necessarily having to build compilers).

If, instead of doing our own distribution, we simply packaged the same software and configurations in most GNU/Linux distributions (Arch, Debian, Fedora, etc.), the testing situation would be a mess, because a specific image built by one distribution might not be the same as one built by another, and there would most likely not be enough people to cover everything (and honestly, we already have a hard time testing all images on all hardware).

For other cases, I really prefer packaging things inside the distributions I use instead of relying on external package managers, because the latter increases the size, the maintenance burden (more updates to do), etc. But all that also takes time (resources), so sometimes I simply end up packaging software only in Guix, and not in the other distributions I use (like Parabola, which is a 100% free version of Arch Linux).

So the question at the end of the day is who has to pay the bills for the choices.

And the ideal situation here is probably to find ways to have the people responsible for the choices pay the bill, for instance by having distributions either support bcachefs or refuse to package it, and also by explaining things to users and letting them choose what is best for their use case.

Something interesting would also be to understand how confident distributions are in supporting specific filesystems. For instance, if people involved in the filesystem also maintain the related packages, and that filesystem is offered by default, or in a list of default filesystems to choose from at installation, then users can choose distributions and/or filesystems accordingly.

I understand that this situation is not ideal, as too much choice also complicates things, but at the end of the day, relying on users to choose what's best for them is probably the least painful way to do things when people disagree.

For instance, telling users that Debian doesn't support bcachefs, or that extensive testing is not done and things broke in the past, is something users can understand. And Debian can also explain to users why things are done this way, for instance to get better QA in general and to share resources across the huge number of packages they have to maintain.

References:
---------------
[1] https://meetings-archive.debian.net/pub/debian-meetings/2025/DebConf25/debconf25-631-static-linking-pitfalls-harms-and-chalenges.av1.webm

Denis.

First-party vs third-party distribution

Posted Sep 2, 2025 3:55 UTC (Tue) by DemiMarie (subscriber, #164188) [Link]

I think this is very similar to the “Flatpak vs distro package” debate. The arguments you are making are very similar to those made by those who want their software only shipped as an upstream Flatpak rather than as a downstream package. The only difference is that in this case, a distro can bundle all the dependencies in a tarball and upload that, though that takes additional effort (for legal review if nothing else) simply because the bundled tarball is bigger and the work used for it cannot be reused for other packages as easily.

If a distro is going to just rebuild the upstream bundled tarball without any modifications or review, I question whether the distro package is actually providing any real value. It can’t ensure that the package is trustworthy or even legal to ship, after all. It is just blindly trusting the upstream author. To be clear, I do think you (Kent) are trustworthy! I just don’t see the value in the indirection the distro provides.

To me, this seems like a better fit for Debian’s extrepo mechanism, where Debian only ships repository definitions and the packages are shipped directly from upstream. That allows users to install software knowing exactly who it came from, and it allows upstream (you in this case) to push updates that you are comfortable with.

Is this something that would make sense? My understanding is that bcachefs’s userspace could be easily shipped this way, and it would address all of your complaints. You could even ship a DKMS package for use with kernels lacking bcachefs support.

I hope this is useful. ~Demi

Debian

Posted Sep 8, 2025 19:50 UTC (Mon) by daniels (subscriber, #16193) [Link]

> the public conversations help with that

How would you say that’s going?

Debian

Posted Sep 1, 2025 17:51 UTC (Mon) by anselm (subscriber, #2796) [Link] (1 responses)

> As a non-Debian-user I know that unstable is nonetheless expected to produce a working system, and Debian testing is where the real breakage happens.


As a Debian user (and developer), I don't think that is actually the case. New packages enter unstable and only get to graduate to testing when they have spent some time in unstable without serious bugs being found in them. IOW, testing is supposed to contain fewer issues than unstable. In particular, there are times when unstable can end up in something of a mess, e.g., when certain tricky transitions of important and widely-depended-upon packages happen. These are usually preannounced and coordinated, but (temporary) breakage can still occur. Testing, OTOH, is shielded from that sort of thing.

The main problem with testing, and the main reason why testing is generally considered unsuitable as a “rolling” distribution, is that testing does not receive timely security patches – as mentioned in the previous paragraph, packages which contain fixes for bugs in testing go to unstable first and only end up in testing in due course after they have proved their worth in unstable.

Debian

Posted Sep 2, 2025 7:45 UTC (Tue) by epa (subscriber, #39769) [Link]

Yes, I got the wrong idea. Unstable is the “most unstable” Debian release.

Debian

Posted Sep 1, 2025 23:00 UTC (Mon) by comex (subscriber, #71521) [Link]

Thanks for the explanation. This is a lot of information I wasn't aware of.

To be fair, this information wasn't mentioned in Jonathan Carter's blog post or in the BTS post announcing the orphaning [1]. Judging by the latter, I can't tell whether Carter was even aware there was a plan to no longer need to "relax[] dependencies"; though, judging by his focus on stable, perhaps he was more worried about the gap being reopened in the future.

Fair or unfair, this incident has received a lot of public attention, and it's hard to avoid "revisionist history-making" when information isn't out there.

In any case, I sympathize with not wanting to deal with that kind of aggression. My personal interest is less about bcachefs-tools itself, more about what the incident portends for the general pattern of Debian relaxing Rust package versions, which in turn plays into even broader debates… Not that I want to have those debates here; they're just the reason I'm interested in the facts. But it sounds like the aggression from upstream played more of a role than I thought, making this more of a special case than I thought.

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1078599

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:10 UTC (Sat) by lordsutch (guest, #53) [Link] (4 responses)

It's not an issue of the maintainer deciding they "didn't feel like breaking with Debian process." Maintainers either follow the process (packaging standards) in Debian, or their packages eventually get kicked from the distribution if they're not fixed to follow the rules. The distribution's rules about vendoring dependencies don't magically disappear because the upstream thinks there are "no technical counterarguments" against bypassing them.

If an upstream doesn't want to play by Debian's rules or thinks the release process is too slow, they can set up their own package repository.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 23:42 UTC (Sat) by pizza (subscriber, #46) [Link]

> The distribution's rules about vendoring dependencies don't magically disappear because there the upstream thinks there are "no technical counterarguments" against bypassing them.

I'm sorry, but if "the distribution's rules" result in the distributed package being so broken that it directly leads to user data loss, then those rules are not fit for purpose.

Fortunately for Debian, "the rules" provide a mechanism for exceptions where necessary. If a major data loss bug isn't sufficient to qualify for a necessary exception, then I repeat myself about those rules not being fit for purpose.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 8:49 UTC (Mon) by taladar (subscriber, #68407) [Link] (2 responses)

To be perfectly honest, the entire discussion sounds like Debian maintainers trying very hard to apply principles derived from a world of C to Rust, in particular the whole thing about having exactly one version of each package globally.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 17:49 UTC (Mon) by q3cpma (subscriber, #120859) [Link] (1 responses)

I think you put your finger on it. If your process/package manager can't reliably support languages with an ecosystem that fast/unstable (meaning you have to break semver at some point), better to fix it and/or tell people "build it yourself".

Gentoo was pretty pragmatic on the question and simply tarballed the dependencies for Go/Rust packages (e.g. https://gitweb.gentoo.org/repo/gentoo.git/tree/sys-fs/bca...).

So what exactly *is* in the cards, then?

Posted Sep 25, 2025 7:36 UTC (Thu) by daenzer (subscriber, #7050) [Link]

If upstream allows co-installation of multiple versions, it can be done with Debian packages as well. It's done all the time, normally by adding a version suffix to the package name.

Obviously, this involves more maintenance effort compared to a single version, so there's a trade-off.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:12 UTC (Sun) by ma4ris8 (subscriber, #170509) [Link]

I tried to bothside this just because I didn't want to get too involved
in arguing about whose fault each part is; I wanted to somehow stay outside
of it.

I can listen to your side though:
You tried to explain calmly, clearly and patiently to the Debian maintainer,
but he wasn't co-operative: he walked off and didn't communicate
any more. You warned Debian bcachefs users, and got
backlash in the form of a blog post, which said something inconvenient about you.

I do understand that if the other side just writes bad things about you
to the world, and then doesn't listen to you, then the situation is quite bad:
negotiation is over, your reputation is worse, and you can't fix
the relationship with that maintainer so easily.

So perhaps you don't need to be friends with everybody.
You could consider with whom you need to be able to co-operate
to get your work done and keep end users happy,
and see what you can do to maintain those relationships.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 12:24 UTC (Sat) by muase (subscriber, #178466) [Link] (60 responses)

Hm, for me it simply reads like OP is mentioning his frustration about release cycles, and is citing a pretty legitimate example: the fact that developers are overwhelmed with obsolete bug reports, because LTS distros are months or even years behind, is nothing new, and it is a real problem for some projects.

I know it's not the distros' fault; it's simply how LTS has to work in practice – however, I can understand the frustration that arises if there seems to be an opportunity to finally update a package(set)... and then that opportunity is missed, and now the dev knows that they have to endure those obsolete bug reports for another n-year release cycle. It definitely didn't read as "just to attack someone".

> To correct the record: bcachefs-tools is not in Debian because Kent was impossible to work with and personally attacked, smeared and/or alienated multiple sets of distinct contributors that attempted to work with him in good faith, one after another.

Tbh, the only personal attack I see here is from you; and as an outsider, this is not very informative – your frustration may be absolutely legit, but this reply doesn't suit your case. If the communication is public, do you have a link or something? :)

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 18:27 UTC (Sat) by paravoid (subscriber, #32869) [Link] (59 responses)

> Tbh, the only personal attack I see here is from you; and as an outsider, this is not very informative – your frustration may be absolutely legit, but this reply doesn't suit your case. If the communication is public, do you have a link or something? :)

Kent in https://lore.kernel.org/linux-bcachefs/wona7sjqodu7jgchtx... called part of a maintainer's job a "bullshit, make-work job", told Debian to "develop a better and more practical minded attitude" and to "stop wasting my time with this stupid bullshit and I can get back to real work". The issues we had spent a lot of our volunteer time to fix were very real issues, many of them upstream, and one in the Rust ecosystem. At the time this was sent, all issues were fixed or on the way to being fixed, and a recent bcachefs-tools package with all of the appropriate dependencies was a few weeks away from getting into Debian testing.

bcachefs-tools was orphaned by its maintainer a few weeks later; I and another contributor (the two of us had done all the recent advancements) stopped investing our time as well. The package has remained orphaned since, for about a year. Anyone can pick it up, but no one has, and that's not because of technical difficulties (as far as packages go, it's pretty trivial).

As an aside, the thread in question existed in the first place as a "PSA" to his users to avoid Debian and Fedora, telling them that "you'll want to be on a more modern distro". *Two weeks later*, he responded in https://lore.kernel.org/lkml/nxyp62x2ruommzyebdwincu26kmi... to Linus that he expects the "major distros" to pick up bcachefs soon. Whether he was dishonest or just naive, I'll leave that to your judgement.

The above was just a small sample. There were literally dozens of responses of this style at the time, random offensive comments etc., across multiple mediums (mailing lists, IRC, Reddit, etc.). I am not keeping a file, though, as I don't feel the need to convince anyone with hard evidence. You don't know me, and I understand that my opinion may not be of much value to you. I hope, though, that you and others may see this as one tiny part of a broader pattern of countless long-time contributors across multiple projects expressing that they have been alienated and driven away by Kent's conduct and sense of entitlement, and that they have good reasons for it.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 19:20 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (57 responses)

So yes, I could have been more diplomatic in my response.

But please do try to put yourself in my shoes; that was after getting a bunch of bug reports from Debian users, and there had been a _lot_ of fail at that point in how the Debian packaging was handled.

I do have to reiterate: the unbundling of Rust dependencies should not have happened for bcachefs-tools; there was no technical reason for it. All my explanations were met with "but that's our policy", and no amount of reasoning was getting anywhere; and the Debian packager breaking the build and sitting on it just should not have happened.

I do sincerely hope you can analyze how things went from the other end and ask yourself what could have been done better to avoid this, because from my end, this was an intensely frustrating issue, and it wasn't being taken seriously and it had very real effects.

Before you start focusing on language and diplomacy, you really need to ask yourself if the technical decisionmaking leading up to that point was sound. When we get breakage as bad as what happened with the Debian package, you can expect the kind of frustration I was voicing there, and "bullshit, make-work projects" still seems to accurately describe what Debian's been doing with Rust dependency unbundling.

When we're dealing with critical system components, you cannot focus just on language and diplomacy and ignore the decisionmaking; that's ignoring our most basic responsibilities.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 21:17 UTC (Sat) by josh (subscriber, #17465) [Link] (50 responses)

> Before you start focusing on language and diplomacy, you really need to ask yourself

No, you really don't. There is no universe in which the things you said produced useful outcomes. The fact that they resulted in someone deciding they no longer wish to work with you or put work into being the downstream maintainer of your software is an *unsurprising outcome*.

> When we're dealing with critical system components, you cannot focus just on language and diplomacy and ignore the decisionmaking; that's ignoring our most basic responsibilities.

You also cannot completely neglect language and diplomacy and understanding other people's requirements, either, as you absolutely did in the messages being quoted here.

Your words will produce responses and actions from others. No amount of wishing things were different will enable you to say things that will predictably produce undesired actions and then have a leg to stand on when being annoyed that those predictable responses and actions happen.

Your words are a lever to be used, just like your code. Write the words that produce the results you want, and if you want to be happier, learn to not resent that as a means of effecting change.

To be clear: the words have to actually match the actions. You can't *just* say the right words but then have them mismatch your actions; down that path you'd find people whose words and truth lack even a passing familiarity. But it's important to, for instance, give people confidence that you care about the requirements they deal with, in some fashion *other* than "what windmill can I burn down so that you don't have to meet those requirements anymore and can do what I want instead".

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:29 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (49 responses)

I'm still waiting to see some reflection from you Debian folks. Debian is the only distro where this became an issue; even Fedora gave us an exception to their unbundling rule (and they gave a better reason for wanting it than anyone from Debian - build server load), and for every other distro it's been a non-issue.

Maybe you guys should just admit there was a screw up so we can all move on?

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:35 UTC (Sat) by josh (subscriber, #17465) [Link] (16 responses)

You'll be waiting a long time if you have no desire to engage in a fashion other than "are you ready to agree I was right yet".

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:41 UTC (Sat) by josh (subscriber, #17465) [Link]

And to be clear, 1) I am not "you Debian folks" here, and 2) I am not commenting on whether I agree or disagree with Debian's policies on bundling *because that's not the point here*.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:48 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (14 responses)

Correct, because the Debian packager switched to a bindgen version that was explicitly unsupported, broke the build, and sat on it, and as a result users lost access to their filesystems when they didn't get the fix for mount option passing.

Do you have a rebuttal? I'd love to hear it.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 0:40 UTC (Sun) by SLi (subscriber, #53131) [Link] (13 responses)

What part of "distros ship a single version of each dependency" is so hard to understand?

You keep saying "there are no technical reasons" as if that made it true.

They may or may not be the best rules, but they are there for a reason. If you think distro maintainers change version bounds on packages for no reason other than to annoy upstreams, that alone should be a big hint telling you that you probably don't understand something.

Or want to understand. I'm not sure which is more true nor which is more flattering.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 3:37 UTC (Sun) by ben0x539 (guest, #119600) [Link] (12 responses)

Did just removing the package ever come up? Surely not shipping a piece of code is preferable for all parties over shipping a known broken configuration.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 11:59 UTC (Sun) by SLi (subscriber, #53131) [Link] (10 responses)

I am not sure it was "known broken"; the only thing I have read repeatedly here is an assertion that "this is not the exact setup I have tested and can guarantee", which is a far cry from that. And if you let every upstream play that card, there would be no limit to the copies of random whitespace-removal libraries in a distribution.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 13:01 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (9 responses)

Look at it from the package author/maintainer's perspective: doing it your way, there's no limit to the number of different untested distro forks that could end up being shipped.

If you want to do cleanup in a package for any reason (consolidating package dependencies is a good cleanup, I would take patches to do that), just get it upstream.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 13:32 UTC (Sun) by SLi (subscriber, #53131) [Link] (7 responses)

Yes, I know. There is this tension. I think the problem is not that they don't understand your perspective, but that you refuse to acknowledge that their distribution-maintenance and support perspective legitimately exists. I think distributions understand this tension well, or at least many people working on them do; I suspect the reverse direction is slightly less understood. But really, I think this question in general has been hashed out enough in public that anyone who cares should be able to understand the reasons for the tension from both the upstream and the distribution perspective.

There doesn't really seem to be anything that makes your software special in this respect, either. In effect you just seem to be complaining that distros do this "obviously silly thing" that they generally do to all packages, because it's a nightmare to maintain and provide security support for 100k versions of each library. You haven't tested it with this combination - and that's equally true of all the other packages.

It comes across as you demanding special treatment, aggressively, while completely dismissing their technical arguments.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 13:56 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (6 responses)

> There doesn't really seem to be anything that makes your software special in this respect, either.

Do you really not see the difference between introducing a bug in the filesystem vs. any other distribution package?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 19:21 UTC (Sun) by SLi (subscriber, #53131) [Link] (5 responses)

In a filesystem that is experimental, used by a few adventurous people (likely less than one percent of ext4's or btrfs's user base), and not explicitly supported by the distribution?

Or even compared to a popular database engine or the kernel?

Important enough to make an exception that doubles the security maintainers' workload when backporting security fixes (because you have promised stability, i.e. not breaking things that used to work)?

No, I really don't. It's just not that special. There's a lot of software that can lead to data loss or worse. And even if it were that important, the question is more about maintainability than about using an upstream-blessed build. Even for keeping the system working, it's not a choice of "we don't want it to work, so we drop vendored dependencies".

Don't *you* really see any maintainability benefit in having, say, one glibc? (And if you do, why do you keep repeating "no technical argument"?)

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 19:25 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (2 responses)

So why couldn't the Debian packaging have waited, instead of plowing ahead with an approach that we knew going in was going to be problematic?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 19:48 UTC (Sun) by SLi (subscriber, #53131) [Link] (1 responses)

Waited for what? For you to be happy about them replacing the set of dependencies with what Debian ships in the distribution, and then to keep backporting security fixes only, as they do? Are we close to that point?

That's what they do to pretty much all the packages. And for most packages, it doesn't cause such a fuss. Not everyone likes it. They don't need to use Debian. (I don't either; this is not what I need most of the time, but there are very valid use cases when this is what you need or want.)

The package also never was part of a Debian release (in fact, it seems it never even gravitated to testing, which is what becomes the next stable release). So they *did* wait. They haven't released the package.

Also, it's not really Debian's number one priority to make upstreams happy. That is a nice feature if it can be had without compromising too much, and it makes their work easier, but first and foremost they cater to their users and adhere to their unique selling points. Apparently `stable` being genuinely stable (as in "doesn't change in breaking ways") is a big thing to enough people.

For example, have you ever removed or restricted a command line option in bcachefs-tools? From Debian's point of view, such a change is not suitable for stable, pretty much ever, absent really, really extenuating circumstances. Think of it similarly to the kernel's userspace API compatibility promise. Debian tries to promise, within reason, that if some weird option with a typo accidentally worked and some people use it, then it keeps working until they update to the next stable release. That means taking security fixes only and backporting them, which, based on this discussion, I suspect you would find even more horrifying than them replacing dependencies with versions that are in Debian.

So, in practice, by being an uncooperative maintainer, you often *can* get what you want, if it is "don't package my software". But that's really up to the distribution, not you, once you have released it as open source.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:28 UTC (Sun) by pizza (subscriber, #46) [Link]

> So, in practice, by being an uncooperative maintainer, you often *can* get what you want, if it is "don't package my software". But that's really up to the distribution, not you, once you have released it as open source.

Sure, the software license grants you the right to do nearly anything you want.

But that doesn't change the essential point that packaging it badly [1] leaves everyone, the packager included, worse off.

[1] "Ignoring upstream requirements/instructions and introducing package-unique bugs that result in data loss" surely qualifies.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:11 UTC (Sun) by pizza (subscriber, #46) [Link] (1 responses)

> No, I really don't. It's just not that special. There's a lot of software that can lead to data loss or worse.

Does Debian care about addressing data loss bugs when they are reported?

If the answer to this is "no" then frankly that renders Debian unfit to be relied upon for pretty much anything.

Meanwhile, the bug report states that this data loss does not happen when building from pristine upstream sources, and looking at what's different between upstream and the Debian package leads you to find that a library was unbundled/unpinned from the upstream source in favor of an older version already in Debian.

So, does the Debian packager:

a) re-bundle the library
b) update the system-wide packaged library to the newer version
c) try to narrow down the fix in the packaged library vs the pinned version (i.e. "backport the fix")
d) give up and tell users to "deal with it, let it go, move on" because fixing the Debian-introduced bug is too haaaaaard
e) the above, but also attack the upstream author for unreasonably insisting that folks use a non-bugged version of the library

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:20 UTC (Sun) by pizza (subscriber, #46) [Link]

> So, does the Debian packager:

...Before replying, keep in mind that we are NOT talking about debian-stable here! This not-so-hypothetical package only exists in testing/unstable.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 13:36 UTC (Sun) by SLi (subscriber, #53131) [Link]

By the way, I do appreciate that you participate in these discussions and at least try to be constructive while arguing your point of view :)

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:28 UTC (Sun) by mbiebl (subscriber, #41876) [Link]

The package was removed from Debian unstable exactly for this reason. There is no more bcachefs-tools package in unstable or trixie (the current stable).

It's obvious that this filesystem, and likewise its userland tools, is not fit for a stable release (yet).
It would probably be a good idea to disable the bcachefs module in the Debian kernel as well, so as not to let Debian users be misled about its support status. So I've filed
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1112681

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:39 UTC (Sat) by josh (subscriber, #17465) [Link]

In any case, I give up. I wrote comments here in an effort to give information about why you're getting the responses and outcomes you regularly get. Your responses make it clear that you're going to keep communicating in a way that produces the same kinds of responses and outcomes, and that you don't actually believe anything you're doing needs changing. If that's the case, then you will keep getting the same kinds of responses and outcomes, and you will keep thinking it's everyone else's fault but your own.

If your inclination is to believe this is *in any way* a question that should be redirected into an exploration of your specific requirements that you believe you were right about, you have missed the point.

So what exactly *is* in the cards, then?

Posted Aug 30, 2025 22:51 UTC (Sat) by sheepdestroyer (guest, #54968) [Link] (19 responses)

Just to understand what is being discussed here: was there ever any technical reason stated in public by Debian for why Kent's recommendation not to unbundle the Rust dependencies could not or should not be followed?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 6:42 UTC (Sun) by zejn (guest, #116440) [Link] (18 responses)

Of course there is a technical reason. If you are a distributor, you only want one copy of each piece of software in a distribution. Updating one copy is manageable.

If every package bundled its libraries, it would be impossible to update packages, and you'd get the security nightmare one can see in Docker containers built on old base images, or in independent software packages that freeze a dependency on an old version of an open-source library.

If a distribution is full of security holes, who's going to use it?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 9:32 UTC (Sun) by mb (subscriber, #50428) [Link] (2 responses)

This technical reason doesn't apply here.
Rust dependencies are statically linked.
Updating a Rust dependency package therefore requires rebuilding all the packages that use it anyway.

There are mechanisms to identify which dependencies are statically linked ("bundled") into a Rust application: https://crates.io/crates/cargo-auditable
That can be used to fix security issues on a distro level.

Agreeing on one specific distro wide dependency version is absolutely not necessary.
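
For the curious, here is a minimal sketch of that workflow (hedged: the binary name is illustrative, and depending on the cargo-audit version, binary scanning may need to be enabled at install time with --features binary-scanning):

$ cargo install cargo-auditable cargo-audit
$ cargo auditable build --release             # embeds the full dependency list in the binary
$ cargo audit bin target/release/bcachefs     # checks that embedded list against RustSec advisories

A distro security team can run the last step over shipped binaries without needing the source tree at hand.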

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 17:19 UTC (Sun) by pbonzini (subscriber, #60935) [Link] (1 responses)

> This technical reason doesn't apply here.
> Rust dependencies are statically linked.

The fact that Rust dependencies are statically linked doesn't make things any easier. Bundled libraries are harmful for the same reasons (detailed for example at https://fedoraproject.org/wiki/Bundled_Libraries) in any language.

Without bundled libraries, if you have three different versions of a shared object, you rebuild those and you're done. If you have three different versions of a library that is statically linked, you rebuild whatever has a build dependency on one of the three static libraries and you're done.

But if anybody can bundle an arbitrary version of a library, you need to have a mechanism in place to track the bundling, because you can't follow a dependency chain.

See for example how Fedora does this:
* the rules: https://docs.fedoraproject.org/en-US/packaging-guidelines...
* the single cases: https://fedoraproject.org/wiki/Bundled_Libraries_Virtual_...

Allowing every Rust package to add an entry into the second table would be crazy.
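
To make the tracking mechanism concrete: in Fedora, a package that bundles a crate declares a virtual Provides for it in its spec file, which is what keeps the dependency chain queryable. A hedged sketch (the package and crate names here are purely illustrative):

# excerpt from a hypothetical .spec file for a Rust application
Provides: bundled(crate(serde)) = 1.0.197
Provides: bundled(crate(libc)) = 0.2.153

$ rpm -q --provides some-rust-app | grep '^bundled('   # query what a built package bundles

With that metadata in place, the security team can find every package that embeds a vulnerable crate version.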

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 20:08 UTC (Sun) by zejn (guest, #116440) [Link]

Yes! And even if you bundle the library and link it statically, the code does not appear out of thin air. It needs to be shipped in the distribution as source.

Debian build machines don't pull random code from the internet. I don't think upstream projects would think of copying the source of a Rust library into their git; they just use cargo.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 9:43 UTC (Sun) by NAR (subscriber, #1313) [Link]

> If a distribution is full of security holes, who's going to use it?

If a distribution is full of broken software, who's going to use it?

It might have made sense 20 or 30 years ago for distributions to religiously keep single versions of libraries, when (nearly) everything was written in C and the number of "interesting" packages was in the hundreds or thousands. Nowadays it no longer makes sense: there are hundreds of thousands of pieces of software around, written in languages whose ecosystems don't really care what's in a distribution (especially if the developers of said software work on e.g. macOS).

It's no wonder the snap, Flatpak, docker, curl|bash way of running/installing software is getting more popular.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 8:57 UTC (Mon) by taladar (subscriber, #68407) [Link] (13 responses)

If you only want one copy, you first need to put in the effort in Rust (and many other modern languages) to enable dynamic linking without the inherent performance and semantic loss that implies.

But independent from that, Rust and similar languages should definitely not be limited to a single version globally for each library. Nobody is going to unify the entire Rust ecosystem on using one version of each library just to satisfy Debian's requirements. Not to mention how do you imagine updates work in that world? Every maintainer switches over to the new version on exactly the same day? The entire distro sticks with old versions of everything until the last maintainer updates their dependency version?
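
Cargo in fact supports this directly: a single build graph can contain several semver-incompatible versions of one crate, and a crate can even depend on two majors at once via package renaming. A minimal sketch (crate names and versions are illustrative):

[dependencies]
rand = "0.8"                                      # resolves to the newest 0.8.x
rand07 = { package = "rand", version = "0.7" }    # an older major, usable side by side

$ cargo tree --duplicates   # lists crates that appear at more than one version in the graph

This is exactly the behavior that a one-version-per-distro policy has to fight against.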

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 11:24 UTC (Mon) by kleptog (subscriber, #1183) [Link] (10 responses)

> If you only want one copy, you first need to put in the effort in Rust (and many other modern languages) to enable dynamic linking without the inherent performance and semantic loss that implies.

No, that solves a slightly different problem. The fact that the code might be compiled multiple times and linked into different binaries is not the issue, C++ headers have had that issue since forever. It's that if you want to be able to rebuild the distribution from source, you would have to include the different versions of the source package required by every package in the distribution (the Desert Island test).

Now, you could argue that being able to recompile the entire Debian distribution from source without an internet connection is a stupid requirement in this day and age, but I don't think it's a stupid requirement. Maybe in the future it could be solved by shipping the entire (or abridged) Git history of Rust packages in the deb package so dependent packages can choose the version they need. But that's not how it works now. And I don't think it's surprising that there exists software that doesn't work with this requirement. Bcachefs is not a filesystem you want to use if you're living on a desert island.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 1:28 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (9 responses)

> It's that if you want to be able to rebuild the distribution from source, you would have to include the different versions of the source package required by every package in the distribution (the Desert Island test).

And what's the problem with that? What stops Debian from including multiple versions of a dependency?

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 15:55 UTC (Tue) by kleptog (subscriber, #1183) [Link] (8 responses)

> And what's the problem with that? What stops Debian from including multiple versions of a dependency?

Nothing technical, OpenSSL is probably the most famous example. There are many libraries/programs that exist in multiple versions to assist with migrations.

What isn't sensible is to include an extra version of a dependency that is only used by a single package. That way lies madness. There is a trade-off to be made between adding extra versions and forcing everything to a single one. It requires human judgement and calm discussion. Not a single developer claiming their package is special.

So the packaging in Debian was under development in unstable, and it was broken. Well duh, it's called unstable for a reason. Ideally at some point it would have worked so the moment bcachefs was actually stable, the Debian packages would have been ready to go. I guess we'll never know now.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 16:25 UTC (Tue) by koverstreet (✭ supporter ✭, #4296) [Link] (7 responses)

> So the packaging in Debian was under development in unstable, and it was broken. Well duh, it's called unstable for a reason. Ideally at some point it would have worked so the moment bcachefs was actually stable, the Debian packages would have been ready to go. I guess we'll never know now.

Nononononono :P

I keep hammering on this, because it's important. This attitude might be ok for any random user package where it's a minor inconvenience if you break it, but not for the filesystem. It's not a minor inconvenience if the filesystem breaks; it's the one component that absolutely has to work.

For the filesystem, the experimental label is a warning to users; it does _not_ mean that we're allowed to screw around and break things on purpose. You should think of the experimental label as "dry-run mode": we haven't been able to test it as widely as we want, so we know we're not finished fixing bugs, but we still do development as if it were a normal, stable, released filesystem like any other.

Importantly, we want to see that not just the code is stable but all the processes for supporting that code are in place and working BEFORE lifting the experimental label.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 17:29 UTC (Tue) by MrWim (subscriber, #47432) [Link] (6 responses)

I realise you have difficulty hearing this, but bcachefs simply isn't (yet) important. People are willing to make exceptions to process if something is important enough. Bcachefs simply isn't there. Ext4, and by extension e2fsprogs, is.

That E2fsprogs got an exception is not surprising because ext4 *matters*. It matters more than process. Bcachefs-tools *is* just a random package as far as others are concerned.

I believe that your communication difficulty arises because you don’t understand that bcachefs is simply not a high priority to the people you’re communicating with. It’s just yet another package/patch to them.

I sincerely hope you succeed in getting bcachefs to the point that it matters too.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 17:47 UTC (Tue) by pizza (subscriber, #46) [Link] (1 responses)

> I realise you have difficulty hearing this, but bcachefs simply isn't (yet) important. People are willing to make exceptions to process if something is important enough. Bcachefs simply isn't there. Ext4, and by extension e2fsprogs, is.

e2fsprogs has been around for over three decades, i.e. far longer than ext4 itself.

If ext4 had required a hypothetical 'e4fsprogs' instead, would you also be arguing that it shouldn't be considered "critical" when Debian started shipping kernels with ext4?

FFS, if "filesystem recovery tools" aren't considered critical path, then WTF possibly could?

So what exactly *is* in the cards, then?

Posted Sep 3, 2025 9:10 UTC (Wed) by farnz (subscriber, #17727) [Link]

I would argue that the only critical filesystems (defined as those where you can expect Debian to make exceptions to normal policy, rather than the norm of can't expect an exception) are those recommended by debian-installer. At the moment, that's ext4 only, so only ext4's support programs are also critical.

In other words, xfsprogs, btrfs-tools and similar are not critical, because the users of those filesystems are doing something non-default, and should be thinking about what they're doing. e2fsprogs is critical, because someone who's following Debian's recommendations will be using it.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 19:25 UTC (Tue) by koverstreet (✭ supporter ✭, #4296) [Link] (2 responses)

There's no need to bring in this popularity contest thinking; we're not talking about things that affect the rest of the system.

We're just talking about perfectly avoidable screwups.

And no, e2fsprogs got their exception because they had a package maintainer who was willing to slow down and do the required legwork on Debian policy and take into account the upstream needs, i.e. testability and reliable bugfixes.

Like I said, I've had to tell distro people to slow down multiple times; "slow down if you think you can't do it right" is a perfectly reasonable position. Blindly charging ahead with things that are only important for stable when we're not ready, and prioritizing distro rules over shipping working code, got us into the Debian mess; with the kernel, all we needed was a sane and consistent policy (i.e. prioritize keeping things working for the end user), like the rest of the kernel has, and calm, reasonable conversations about priorities instead of dictating over the minutiae.

Maybe you don't think bcachefs is important, but the users running it certainly do; most of the users I've talked to are running it specifically because they needed something more reliable - so it's my responsibility to see that it continues to be, and that does mean dealing with all sorts of issues and screwups as they arise.

So what exactly *is* in the cards, then?

Posted Sep 3, 2025 9:43 UTC (Wed) by paulj (subscriber, #341) [Link]

Kent... you can't fix the world.

Debian is its own ecosystem. Each distro generally is. They have their ways of doing things. They may sometimes seem wrong to you, but they have their reasons - which could extend far beyond your code and your concerns, and also stretch far across time. There may be deeply buried community reasons for some things seeming less efficient than you think they should be.

Let them do their thing.

Unless you want to become a DD, and spend many years building trust with others, demonstrating you understand all the relevant processes and the trade-offs behind them, and demonstrating you know how to persuade other DDs to change a process - just let them do their thing. Keep an eye out on what patches they apply to ship your code (if they package your code); see what patches you can incorporate, or what deeper fixes you can make to your code to avoid some patch; help them if they ask for help and you can. But... let them do their thing, and don't go telling them that you know their ecosystem better and how to (paraphrasing) "fix their mess". Just don't do that.

Let them do their thing.

You go focus on bcachefs and your users, as you know you should, and just let the other stuff slide.

So what exactly *is* in the cards, then?

Posted Sep 3, 2025 10:35 UTC (Wed) by paravoid (subscriber, #32869) [Link]

No conversation about velocity ever took place. Disagreements about velocity, distro policies or whatever were not the reasons the package was orphaned, remains unadopted to this day, and was dropped from unstable. Kent's hostility, and his inability to work in the collaborative environment that open source is, are. I hope that's evident just by looking at this LWN page, where he is still talking about "screwups" to this day, and at his refusal to apologize for conduct that is clearly hostile and unacceptable in our communities.

e2fsprogs' maintainer in Debian is, and has been for the past 20+ years, Theodore Ts'o, who is also the ext4 upstream. "they had a package maintainer who was willing to slow down and do the required legwork [...] and take into account the upstream needs" is not untrue, but it is a kind of weird way to put it, so I guess this is all just speculation on Kent's part without knowledge of the actual facts (despite the confidence with which it was claimed).

Generally speaking, it's been hard to keep up with this thread and try to fact-check claims that are... a creative approach to the truth, to say the least. "Alternative" facts and shifting the focus of the conversation elsewhere (e.g. getting kicked off Debian in a thread about getting kicked off the kernel) are well-known ways to exhaust everyone else you've ever disagreed with, leaving you alone to present your own version of the truth. I'd encourage everyone to approach with caution before forming an opinion.

So what exactly *is* in the cards, then?

Posted Sep 3, 2025 7:51 UTC (Wed) by taladar (subscriber, #68407) [Link]

You are looking at this the wrong way. If exceptions are needed for popular packages then something is wrong with the process that likely also affects less popular packages.

This is quite similar to the way e.g. Microsoft, Google or Apple sometimes use internal, undocumented OS APIs to solve a problem their public APIs can't solve, and then complain when other people start reverse-engineering and using them too, for lack of alternatives.

If the official way does not work for everybody, not even for the popular packages that get lots of attention by everyone involved, something likely needs to change.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 17:25 UTC (Mon) by kreijack (guest, #43513) [Link] (1 responses)

> But independent from that, Rust and similar languages should definitely not be limited to a single version globally for each library. Nobody is going to unify the entire Rust ecosystem on using one version of each library just to satisfy Debian's requirements.

The solution is simple, and it is called semantic versioning.
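
In Cargo terms (a hedged illustration; the crate and version are arbitrary), a plain version requirement already encodes this: it is a caret requirement, and for 0.x crates the minor number is treated as the breaking component:

[dependencies]
bindgen = "0.66"   # means ^0.66: any 0.66.x is accepted, 0.67+ is not

So a distribution only needs the newest release within each compatibility range, just as in the libbluray example below.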

> Not to mention how do you imagine updates work in that world? Every maintainer switches over to the new version on exactly the same day? The entire distro sticks with old versions of everything until the last maintainer updates their dependency version?

This already happens now:

$ apt search libbluray | egrep libbluray
[...]
libbluray2/unstable,now 1:1.3.4-1+b3 amd64 [installed,automatic]
libbluray3/unstable 1:1.4.0-3 amd64

At the same time, we have two different versions of the library package. The first (libbluray2) follows the 1.3.x versions, the second one (libbluray3) follows the 1.4.x versions. The assumption here is that the 1.3.x revisions are backward-compatible bug fixes only, and the same holds for 1.4.y; so it is enough to have the latest 1.3.x and the latest 1.4.x (between which backward incompatibility is allowed).

Now, I see that "bindgen" reached version 0.66.0 on June 14, 2023 [1] and version 0.69.0 on Nov 1st, 2023 [2], with several 0.x.y releases following. To me, three breaking revisions within a few months seems too much for a tool designed to be a base for other programs. With such a high release frequency, the likelihood of an error is high.

Let me clarify: you can work with these "high"-frequency releases; that is not an error in itself. But of course it puts a stable/conservative distribution (like Debian) in difficulty, since it cannot follow this rate of revisions.

And the answer of "don't worry, because we link statically" *is not a sane solution*, because in case of a bug in a library it would require upgrading all the affected applications instead of a single library.

In conclusion: either a) we decide that bcachefs is not stable enough, in which case a high rate of revisions (of the tool and its dependencies) is allowed/possible, but this is incompatible with the Debian model; or b) we decide that bcachefs is stable, in which case it has to lower the revision rate of the tool and its dependencies (which may mean avoiding some dependencies that are not stable enough).

I think both choices are good ones. What is bad is pretending to be stable while having a high rate of revisions (of the tool and its dependencies); that will conflict with a "stable/conservative" model (put Debian and/or the kernel here).

My 2¢: bcachefs is complex enough; why add further complexity by using Rust for the userspace tools? Even though Rust is an excellent language, C is still more portable... If the goal is to bring bcachefs to the mainstream, having bcachefs-tools in Rust is only a distraction.

[1] https://github.com/rust-lang/rust-bindgen/releases/tag/v0...
[2] https://github.com/rust-lang/rust-bindgen/releases/tag/v0...

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 17:35 UTC (Mon) by mb (subscriber, #50428) [Link]

bindgen is pretty much a special case rather than an ordinary dependency.
It's a build tool.
Just like any other build support script that is present/vendored in the upstream sources.

I don't see why we couldn't simply vendor whatever bindgen version upstream uses with every Debian package and be done with it.
A system-wide update/upgrade of bindgen is just not a thing that makes any sense.

Debian should just drop the bindgen .deb package.
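
Mechanically, that kind of vendoring is nearly a one-liner at source-package preparation time; a hedged sketch of what it could look like:

$ cargo vendor vendor/   # copies the exact crate sources (bindgen included) into ./vendor
                         # and prints the .cargo/config.toml stanza that redirects builds to it

The vendored tree can then be shipped inside the Debian source package, satisfying the no-network-at-build-time rule without forcing a distro-wide bindgen version.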

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 19:26 UTC (Sun) by highvoltage (subscriber, #57465) [Link] (10 responses)

Some reflections on my side. It's been a year since I orphaned bcachefs-tools from Debian and I have no regrets.

I read all your comments above where you claim to understand things like how software development lifecycles work, but I don't believe that you do.

"a few months out of date" isn't that long a period of time in the Debian world, and you refer to 2 years as a long time? Our standard support for a release is 3 years, and between companies like Freexian and Canonical, extended support can go up to 7-10 years. I certainly consider the latter to be a bit on the extreme end, but still, a filesystem should be supportable for at least a normal release cycle.

If you can't get that right for bcachefs (and I really wish you all success) then I guarantee you that bcachefs won't have any future.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 19:39 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (8 responses)

I have no regrets either, so I guess that makes two of us :)

You really were just getting ahead of yourself by putting the Debian stable concerns first. First priority needs to be supporting current users, and we were nowhere near ready for that.

For supporting Debian stable, the thing to do will be: wait until we know what the bcachefs backport situation looks like, and gather input from other filesystem developers who have dealt with this in the past and ask what kinds of issues they've seen and what they'd do again or do differently. XFS folks would be the first people I'd ask.

Things are ok now; Debian users know they have to build from source (or use a well maintained PPA; there might be one but I'm not sure).
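
For anyone in that position, a from-source build is typically along these lines (a hedged sketch; check the upstream README for the current repository URL and the list of build dependencies):

$ git clone https://evilpiepirate.org/git/bcachefs-tools.git
$ cd bcachefs-tools
$ make && sudo make install   # the Makefile drives the cargo build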

There's no need to rush.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 12:27 UTC (Mon) by kragil (guest, #34373) [Link] (7 responses)

My guess is that responses like this are the reason you are not allowed to contribute to the kernel and to other projects anymore.

Try something totally radical: respect people and ask nicely for help. Nothing else. Don't tell them what they have to do, don't call them names. And above all else, try to be a nice guy all of the time, not the opposite. Millions of people are able to do it. You certainly have it in you, I hope.

Think about it.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 13:08 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (6 responses)

My job isn't to always be the nice guy, though. My job is to ship code that people can trust, to work with and grow my community, to set standards for how we do things, and to make this sustainable in the long run.

I have many, many demands on my time. If someone is being disruptive, causing problems, taking up too much of my time and not learning, at some point I have to tell them to get lost - or I'll be shirking my other responsibilities.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 13:32 UTC (Mon) by ferringb (subscriber, #20752) [Link]

You literally just described why bcachefs is getting the boot, specifically from the non-Kent standpoint.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 14:59 UTC (Mon) by kragil (guest, #34373) [Link] (4 responses)

If you really think you grow your community by behaving this way, there is nothing anybody can do for you. bcachefs is probably a dead filesystem walking. Sad, but true.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 15:19 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (3 responses)

No, on hard technical challenges the important thing is to keep the focus on the core technical issues.

Keep in mind there is a history of ambitious Linux filesystems not living up to expectations; the current most successful Linux filesystem comes from SGI, and since it came to Linux the community has repeatedly burned out its maintainers.

Given that, if this is to get done, it can't keep being a free-for-all. Small, focused and professional is the way to go.

My job is not to give equal weight to everyone's opinion. My job is to get the job done, and to work with people who will help get the job done.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 15:47 UTC (Mon) by paulj (subscriber, #341) [Link] (2 responses)

> Small, focused and professional is the way to go.

My advice would be to stop replying on forums. Even with the best of intentions on all sides, there will be misunderstandings. Go off and make bcachefs great for your users. If you succeed in that, a solution to upstreaming will come in time (which may be that others take care of that work for you).

Just go and focus on your code and users for a good while, and let the heat subside.

Stay off reddit, stay off Phoronix, stay off LWN, etc., etc. Best thing you can do for yourself, your project and your users for now.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 16:11 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (1 responses)

No, going ivory tower isn't the answer either :)

That's the way the kernel has generally gone, but it's not good when developers and maintainers are disconnected from the needs and experiences of the userbase.

Communicating priorities, direction, gathering input, working with people - that's all stuff I do on a daily basis. Normally it doesn't take much time, but when things blow up I've found it important to send a clear consistent message - otherwise the drama takes over all the spaces that we normally use for work.

(But I am looking forward to getting back to coding after all this dies down. Hopefully I'll get some more work done on the rebalance patchset after lunch.)

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 16:30 UTC (Mon) by paulj (subscriber, #341) [Link]

I didn't phrase it clearly enough: I meant stay away from forums (Phoronix, Reddit, LWN, etc.) for any of these Linus/upstream discussions. I did not mean stay away from your users when it comes to maintaining and developing bcachefs. I.e.:

> Just go and focus on your code and users for a good while, and let the heat subside.

Do that. Focus on the code and your users. Stay away from all the other online forum discussions about Linus/upstream.

I understand where you're coming from, and I can mostly understand why from your perspective your responses seem in each case to be reasonable responses, but regardless of how you see it, it is an obvious fact that they're not being received in a way that helps you wrt upstream kernel. Go do:

> Small, focused and professional is the way to go.

Focus on the code and users, forget the rest for a while (and force yourself to stay away from the rest!). And if you succeed with bcachefs, resolutions may become available. Please take the advice. ;) And good luck. (I, for one, want to see bcachefs succeed.)

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 8:58 UTC (Mon) by taladar (subscriber, #68407) [Link]

A few months is an eternity if that is your iteration time for an experimental piece of software.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 8:20 UTC (Sun) by zejn (guest, #116440) [Link]

Debian subscribes to the religion of the "Debian Packaging Policy", which puts the goal of having only one copy of the same software higher than having the software work absolutely bug-free. There is a *technical reason* for that rule: it makes updating software easier, since you only need to update one copy to make all the other library users use the updated version. This is great for a distributor and also great for security, which I, a Debian user, actually like. To an outsider, the DPP unbundling rule may seem like "because Debian said so".

You seem to subscribe to the religion of "sound technical decision making", which, honestly, for you as a provider of some software mostly means "hey, this makes my life easier, I'm going to do more of this". "This" may be a dependency on a specific library, of which you'll require version X or maybe greater. It can also mean other things, and since you don't have a policy written down, to other people it often means "because he said so".

Debian will unbundle everything it can, because it makes sense for it to do so. You can provide your own package repository if you want; it's trivial, and a lot of projects do so, for various reasons.

> When we're dealing with critical system components, you cannot focus just on language and diplomacy and ignore the decisionmaking; that's ignoring our most basic responsibilities.

Why are you ignoring Debian's technical reasons for unbundling, then?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 9:56 UTC (Sun) by paravoid (subscriber, #32869) [Link] (4 responses)

"I could have been more diplomatic" sounds like quite the non-apology. To be clear, are you apologizing for your past behavior in that email?

In terms of language and critical system components: there is a fair amount of published research on this topic. It does not agree with you. Language and respect are extremely important components. Fostering a culture of trust, respect and psychological safety is what makes systems safe. This is accomplished through open communication, clear language, demonstrating respect, encouraging participation, valuing contributions, and contributors having no fear of retribution (among others). It has been proven time and again, in processes written in blood, as documented in (actual human life) safety reports -- whether from Chernobyl, air crash investigations or other types of large-scale accident post-mortems. Some of it is captured in the field of "Crew Resource Management" (CRM) (in turn based on the principles of Cockpit Resource Management), if you want to read more on this topic.

To put it simply: if I jumped on a plane and heard the pilot telling his first officer "stop wasting my time with this stupid bullshit so that I can get back to real work"... I'd probably ask to get off the plane.

But you do you. Just maybe stop stating that the reason bcachefs-tools is not in Debian (or the kernel, from what it looks like) is a "broken release process". The reason is that you've alienated multiple collaborators on the Debian end with your behavior. That's a fact, not a subjective topic to have an academic discussion about.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 13:03 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (3 responses)

I don't subscribe to forced public apologies.

I believe in talking things out with the goal of understanding each other's perspectives and concerns, and mending actual working relationships.

I still haven't seen the Debian people giving adequate (or any) concern to reliability, QA, upstreaming of changes, etc. and I find this concerning.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 21:28 UTC (Sun) by josh (subscriber, #17465) [Link] (2 responses)

And many people have not seen you giving adequate (or any) concern to their own requirements, either. If you continue believing that everyone *except you* needs to change, you're going to continue to get disappointing outcomes like the one mentioned in this article.

You say "understanding each other's perspectives and concerns", but in general, it seems like you demand to be understood, without doing any understanding.

By way of example, a conversation about reliability and QA is a two-way conversation, in which you need to actually care about distribution integration and policy as something other than an obstacle. There are packages that care deeply about reliability and QA, that work with distributions on testing and CI to ensure that the configurations the distributions use get tested with a full gauntlet of upstream tests.

Conversations like that require not alienating people who might work with you. And if you believe that diplomacy doesn't matter and only technology should matter, you're going to keep alienating people. When you say things like "but for some strange reason no one I've talked to wants to take that on", you may wish to consider why that might be. If you make the incorrect assumption that all the reasons have nothing to do with you, you're going to come to incorrect conclusions about how to solve the problem.

It sounds, though, like many people have tried to discuss this with you, and they didn't have any luck, so this comment thread seems unlikely to tip the scales.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 15:04 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (1 responses)

> And many people have not seen you giving adequate (or any) concern to their own requirements, either. If you continue believing that everyone *except you* needs to change, you're going to continue to get disappointing outcomes like the one mentioned in this article.

The packaging issues were discussed extensively; the only concrete reason the Debian folks gave was security updates for dependencies. But that doesn't hold water: since Rust dependencies are statically linked, the depending package needs to be respun anyways, and with bots already checking and notifying me when dependencies have security updates, it's a solved problem.

> By way of example, a conversation about reliability and QA is a two-way conversation, in which you need to actually care about distribution integration and policy as something other than an obstacle. There are packages that care deeply about reliability and QA, that work with distributions on testing and CI to ensure that the configurations the distributions use get tested with a full gauntlet of upstream tests.

No, with bcachefs-tools, QA and reliability was not a two-way conversation; the folks on the Debian side weren't giving it any concern and were putting the packaging concerns first.

Hence the breakdown.

If I'd seen them putting effort into testing, QA, and the process concerns (making sure we can reliably get bugfixes out), things would have gone very differently.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 16:44 UTC (Mon) by paravoid (subscriber, #32869) [Link]

No matter how many times Kent repeats this in this thread and elsewhere, the problem of "bcachefs-tools in Debian" is NOT a matter of "release process", Rust, distro policies, vendoring, etc. He could be having these QA trade-off conversations with a maintainer. It is true that exceptions can be made if/when there is a good reason.

The problem is that *there is no maintainer* to have these conversations with. Past and many prospective contributors were driven away by Kent's conduct (attacks, profanities, smear campaigns, etc. - see above), and no one has touched this for a year. I speak for myself; one can make their own guesses for every other potential contributor.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 22:29 UTC (Sun) by ericc72 (guest, #41737) [Link]

I am not defending anything here, but regarding the "bullshit, make-work job" comment referenced in your post, that one sounded pretty cruel. I then started reading the PSA thread, and that comment came as a response to this:

> Has anyone volunteered to be the political advocate for bcachefs-tools
> bugfix releases in Debian?

To which Kent replied, "No, and nor would I recommend anyone else for that kind of bullshit,
make-work job."

In that context I get what Kent was saying, and it does not come across as a slight to maintainers. I'm not saying whether it was a good response or not; more that Kent didn't like the idea of having a political advocate involved.

I have been following a lot of this somewhat closely. I want to see bcachefs succeed, but I also know there has been a lot of damage done. Without taking any sides, there is for sure a lot of not seeing things "eye to eye". My hope is that somehow something can be worked out here. From a technical perspective, I am excited about what bcachefs could become. But I also understand the issues involved, and all I can do is observe from the sidelines.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 3:34 UTC (Sun) by jmalcolm (subscriber, #8876) [Link] (21 responses)

> To correct the record: bcachefs-tools is not in Debian because Kent was impossible to work with

I have been critical of Kent so let me defend him here. My understanding of the issue with bcachefs-tools in Debian was that bcachefs required a newer version of Rust than Debian wanted. This is a technical issue, and being uncompromising on a technical issue is completely different from being uncompromising on a philosophical or process issue. Also, bcachefs is hardly the only project that has had dependency issues with Debian. For all its benefits, Debian is poorly suited to new and evolving technologies (in my view at least). Look at Wayland in Debian vs other distros for another example. Even Debian 13 ships with NVIDIA drivers that lack explicit sync, which means Wayland will still not work for many people despite having worked well in other distros for some time already. I am on Kent's side here.

> Debian was not even close to the topic at hand, and yet you felt the need to bring it up

Absolutely. Even in a thread where the discussion was explicitly about how nice it is to see people taking the high road, Kent waltzes in and starts lobbing grenades. He is a passionate curator of other people's faults but I have never seen him confess to his own--even when confronted with significant evidence. If there is a problem, the blame lies elsewhere by definition in his world. Watching him burn the bridges that allow me to use bcachefs and then claiming to care about his users has really started to rub me the wrong way. Do what you want but stop telling us your choices are other people's fault. Kent fights for one person. That is more of a passion for him than his filesystem and that sucks (for me). Kent is the scorpion to my frog.

Bcachefs is a pretty great filesystem. I wish I could keep using it.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 3:41 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (20 responses)

> Absolutely. Even in a thread where the discussion was explicitly about how nice it is to see people taking the high road, Kent waltzes in and starts lobbing grenades.

Well, if I don't bring it up, someone else always does in these discussions.

I've never named names and I haven't been lobbing personal attacks; I'm just talking about the process issues bcachefs has faced, and there's a real common thread between the Debian one and the kernel issues.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 14:22 UTC (Sun) by marcH (subscriber, #57642) [Link] (19 responses)

> Well, if I don't bring it up, someone else always does in these discussions.

Two people could write the exact same words, yet they would have a totally different effect on the discussion and its outcomes. That is unfortunately difficult to keep in mind for engineers too focused on hard science, correctness, test results, code, etc.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 14:30 UTC (Sun) by koverstreet (✭ supporter ✭, #4296) [Link] (18 responses)

So you think we shouldn't be focused on those things? :)

Delivering working code that solves real problems is why we're here, isn't it?

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 21:47 UTC (Sun) by marcH (subscriber, #57642) [Link] (1 responses)

We keep hearing about your tendency to digress, miss the point and miscommunicate. Now I see it first hand.

You can build the most perfect filesystem in the world, but it will keep existing in a vacuum if you cannot temporarily put yourself in the shoes of people who may have some disagreement(s) with you. That's required to communicate. It does not imply agreeing.

What would be great is partnering with people with better communication skills who could "bridge the gap". Such people are rare, and this one would also have to understand enough of the technical details. BTW that's how most successful projects work: by combining a large spectrum of skills from different people. Nothing unusual, and it's been mentioned before in this context.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 22:04 UTC (Sun) by marcH (subscriber, #57642) [Link]

> BTW that's how most successful projects work: by combining a large spectrum of skills from different people.

Forgot to say: ... combining different skills from different people _who are all reasonably aware of what they are good and bad at_ and can hear some feedback. There are some skills that you cannot just "combine": to be part of some project/team, everyone must have some basic teamwork skills.

There are jobs where you can control everything alone and you can even create a filesystem that way. But _distributing_ it on a large scale is a very different story.

So what exactly *is* in the cards, then?

Posted Aug 31, 2025 21:52 UTC (Sun) by mjg59 (subscriber, #23239) [Link] (15 responses)

> Delivering working code that solves real problems is why we're here, isn't it?

By that metric, you've failed. Bcachefs is unlikely to be meaningfully supported in mainstream distros, massively limiting the number of users you can deliver that working code to. If the most important thing to you is delivering that code, not merely writing it, you need to accept that the approach you've taken has not been a success.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 0:41 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (14 responses)

we're already in a bunch :)

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 1:08 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (13 responses)

But that's given the previous status quo of bcachefs being maintained in the upstream kernel, right? Being out of tree adds load to distro maintenance, and mainstream distros are by and large unenthusiastic about carrying significant out-of-tree codebases in their kernels: Fedora has a hard policy against it, Debian is highly unlikely to do it, Ubuntu would if they had commercial reasons to but ZFS already ticks those boxes, and I don't know enough about SUSE policy to have an informed opinion there.

Packaging it separately via dkms avoids placing that load on the kernel maintainers, but it also makes bcachefs pretty much impossible to use as an installation target, and it adds complexity on systems with Secure Boot enabled.

Every realistic outcome here increases friction for users of bcachefs, which is contrary to your desired goal. I'm happy to accept that the practical impact for people running things like Arch may be minimal, but that's a subset of the potential reach. And if you're ok with that, what was the point of trying to get an experimental filesystem into mainline in the first place? This whole unfortunate series of events could have been skipped and we'd be pretty much where we are right now, except bridges wouldn't have been burned and it'd be much easier to get a more mature bcachefs into a place where everyone could easily benefit from it.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 1:14 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (12 responses)

I'm ok with doing things slower if it means doing them right.

It's bizarre how many times I've had to tell people this! I really don't know why the idea of wanting to build a filesystem the slow, patient, methodical way is so strange to people :)

Seriously, right after it was merged I had so many meetings with the Fedora people where I was becoming slowly horrified by how fast they wanted to go and I kept trying to convey to them there was no need for that. It comes up all the time.

I try to move fast on debugging, hardening, all that good stuff. I am not rushing to have it in every installer tomorrow; one step at a time.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 5:07 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (11 responses)

> I'm ok with doing things slower if it means doing them right.

But a huge part of the problem here was that you kept pushing things during parts of the release cycle where they shouldn't have been pushed. When people asked you to slow down you carried on doing exactly the same thing. If you'd taken a slower approach to landing things during the merge window then there'd have been less need to land things in RC to avoid poor user experience, and there'd have been much less friction and maybe we'd still have a well maintained copy of bcachefs in the mainline kernel which would automatically end up landing in every major distro and be available to a larger number of users. And if that's not the approach you wanted to take then, again, why upstream it in the first place?

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 5:11 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (10 responses)

I was quite explicit: I am not pushing on rolling out to the distros any faster than we already are; what I always push hard for is staying on top of bugs and supporting users.

Because that's how we get something rock solid and bulletproof, by making it a priority at every step.

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 5:38 UTC (Mon) by mjg59 (subscriber, #23239) [Link] (9 responses)

Right, I understand that, but: in that case why the push for bcachefs to be merged into mainline at all, rather than waiting until it was at a point where it was a reasonable choice for mainstream distros?

So what exactly *is* in the cards, then?

Posted Sep 1, 2025 11:57 UTC (Mon) by koverstreet (✭ supporter ✭, #4296) [Link] (8 responses)

Well, based on where the code was at and on historical practice, it looked like the right time. I don't think that was a bad decision.

bcachefs went upstream at a much later stage of development than btrfs or ext4: we weren't making breaking changes to the on-disk format, it had active users, and things were looking reasonably stable for them.

Since it went upstream, lots of stability and scalability issues have been found and fixed, but we've done pretty well on data integrity. There have been only two bad data-loss bugs since going upstream: the upgrade/downgrade bug in 6.8 or 6.9 that interacted badly with the new vector clocks for split brain detection, and the subvolume deletion bug in 6.15. Both were debugged quickly, and repair/recovery code was rolled out quickly.
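(Aside, for anyone unfamiliar with the technique: the general version-vector idea behind split brain detection is simple enough to sketch. The following is purely illustrative - the types and names are invented, and it is not bcachefs's actual code - but it shows why "neither side's clock dominates the other" is the signal that two devices accepted writes independently.)

    // Minimal sketch of split-brain detection via version vectors.
    // Purely illustrative; types and names are invented, not bcachefs's.
    use std::cmp::Ordering;

    /// Each array member tracks the newest event it has seen from every
    /// member (including itself). Vectors are assumed equal-length here.
    struct MemberClock {
        seen: Vec<u64>,
    }

    /// If one clock dominates the other, one device simply fell behind
    /// and can be resynced. If neither dominates, both sides accepted
    /// writes independently: split brain.
    fn compare(a: &MemberClock, b: &MemberClock) -> Option<Ordering> {
        let mut ord = Ordering::Equal;
        for (&x, &y) in a.seen.iter().zip(&b.seen) {
            match (ord, x.cmp(&y)) {
                (_, Ordering::Equal) => {}
                (Ordering::Equal, o) => ord = o,
                (prev, o) if prev != o => return None, // concurrent histories
                _ => {}
            }
        }
        Some(ord)
    }

    fn main() {
        let a = MemberClock { seen: vec![5, 3] }; // device 0's view
        let b = MemberClock { seen: vec![4, 4] }; // device 1's view
        match compare(&a, &b) {
            Some(o) => println!("ordered ({:?}): safe to resync", o),
            None => println!("split brain detected"),
        }
    }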

Now, 6.16 is looking very solid, and I think we've hit a full month since the last critical bug report, and all the scalability issues people have found have been fixed (people started throwing it on 100+ TB disk arrays a lot sooner than I expected!). So stabilization has gone well, users are now calling it "unkillable" and it survives in all sorts of situations (often involving absolutely garbage hardware or disaster scenarios like wiping one disk of a three disk array with no replication) where other filesystems don't.

So based on that, upstreaming should've been the right call.

Keep in mind, filesystems are stupidly massive projects: upstreaming when it was a reasonable default choice for major distros was never the plan; the plan has always been to incrementally stage the release to gradually wider and wider audiences so I don't get flooded with bug reports all at once. We did need that wider userbase when it went upstream, but 100x bigger was sufficient, not the entire world.

Where things have been really rough has been building an actual developer community around it. Red Hat showed a lot of interest prior to upstreaming, but then never committed in a serious or meaningful way. I was starting to receive patches and see interest from the broader Linux filesystem community, but that evaporated when the drama over pull requests became an every-other-week thing, with regular threats from Linus to remove it from the kernel.

So I ended up still doing 90% of the work, working stupidly long hours to keep up with bug reports from a userbase that had shot up by probably two orders of magnitude. Take a guess what my life has been like for the past two years :)

Now, things are looking stable; I still have some things to get through before I can remove the experimental label, but not much. But I really want bcachefs to be at feature parity with ZFS for most people before I can really ease up, so I have another year or two of solid grinding.

That means finishing erasure coding, finishing online fsck, failure domains for managing large numbers of drives, and a whole bunch of management work. I'm not even really thinking about send/recv yet.

So now, it probably won't go back upstream until it's well and truly finished (insert meme about "the one filesystem to rule them all"). Fortunately I'm in a much better position w.r.t. funding and finances than I was two years ago, and we've gotten a ton more real world battle testing, and the community has grown substantially, and the remaining work is pretty well sketched out.

Life goes weird places sometimes.

So what exactly *is* in the cards, then?

Posted Sep 2, 2025 22:09 UTC (Tue) by sheepdestroyer (guest, #54968) [Link]

I'm the type of user who had been eagerly waiting for years for bcachefs to go upstream and lose the experimental tag before migrating everything.
I hope it stays in somehow.

So what exactly *is* in the cards, then?

Posted Sep 5, 2025 16:46 UTC (Fri) by mmechri (subscriber, #95694) [Link] (6 responses)

@Kent: You’ve made it clear that you believe the kernel development rules/processes are inadequate for bcachefs. That’s your prerogative. But surely, given how long you’ve been around, you knew that long before submitting bcachefs for mainline. Given this, why did you submit it for mainline at all? Did you expect that bcachefs would be exempted from following those rules/processes? This isn’t a rhetorical question, I’m genuinely trying to understand your thought process.

> So now, it probably won't go back upstream until it's well and truly finished

This kind of implies that Linus will one day start accepting your bcachefs PRs again. Is that something he has confirmed to you?

So what exactly *is* in the cards, then?

Posted Sep 5, 2025 19:16 UTC (Fri) by koverstreet (✭ supporter ✭, #4296) [Link] (5 responses)

The kernel development process as it is normally applied would've been fine for bcachefs: like I mentioned elsewhere, I started perusing pull requests from other subsystems and I was actually legitimately surprised to see that it looks like I've been stricter with what I consider a critical bugfix than other subsystems. (While still in the experimental phase I do accept a slightly higher risk of (non-serious!) regressions than I will post-experimental, so that I can prioritize the throughput of getting bugfixes out; that's why I was surprised.)

Other subsystems will absolutely send features outside the merge window if there's a good reason for it; I even saw refactorings go in for XFS during rc6 or rc7 recently.

It's normally based on just common sense and using good judgement, balancing how important a patch is to users vs. the risk of regression. That should take into account QA processes, the history of regressions in that subsystem (which tells us how well those QA processes are working), how sensitive the code is, and how badly the patch is needed. And when there are concerns, they're talked through; things break down when people start dictating and taking an "I know better, even though I'm not explaining my reasoning" attitude.

The real breakdown was in the private maintainer thread, when Linus had quite a bit to say about how he doesn't trust my judgement based on, as far as I can tell, not much more than the speed with which I work and get stuff out. That speed is a direct result of very good QA (including the best automated testing of any filesystem in the kernel), a modern and very hardened codebase, and the simple fact that I know my code like the back of my hand and am very good at what I do.

I've been working in storage for going on 20 years at this point, and I've always been the one ultimately responsible for my code, top to bottom: from high-level design all the way down to responding to every last bug report and working with users to make sure that things are debugged and resolved thoroughly and people aren't left hanging. People are still running - and like, and trust - code managing their data that I wrote when I was 25, and there's a bunch of people who are getting their kernel from my git repository; for a lot of them it's explicitly because they've lost data to our other in-kernel COW filesystem and needed something more reliable, and they have found that bcachefs delivers. I don't know anyone in the filesystem world with that kind of resume.

> This kind of implies that Linus will one day start accepting your bcachefs PRs again. Is it something that he confirmed to you?

We both explicitly left the door open to that in the private maintainer thread, although on my end it will naturally be contingent upon having better processes and decisionmaking in place.

So what exactly *is* in the cards, then?

Posted Sep 5, 2025 22:40 UTC (Fri) by marcH (subscriber, #57642) [Link] (3 responses)

> I was actually legitimately surprised to see that it looks like I've been stricter with what I consider a critical bugfix than other subsystems.
> ...
> I even saw refactorings go in for XFS during rc6 or rc7 recently.

Surprising, can you please share some commit IDs?

> It's normally based on just common sense and using good judgement, balancing how important a patch is to users vs. the risk of regression.

The most important points seem to be missing from that list: size and nature of the changes. For both risk and maintainer bandwidth reasons.

If a "critical bug fix" has a non-negligible risk of regression, then either there's a clear divergence on the definition of a "critical bug fix", or the whole feature should be temporarily disabled (cause it has no bug fix simple enough for an RC phase). Or just filed and advertised, e.g. "don't use version X".

> (While still in the experimental phase I do accept a slightly higher risk of (non serious!) regressions that I will post experimental so that I can prioritize throughput of getting bugfixes out; that's why I was surprised.)

I think I've been noticing a bit of dissonance on that "experimental" topic...

- Either a significant number of bcachefs people use Linus' mainline and trust it with their data. Then that branch is not really "experimental" any more (whatever the label says), and no large change should ever be submitted in the RC phase but only small, "critical bug fixes"
- Or, it really is still "experimental", users should not trust that mainline branch, and then there is no emergency to fix problems in it! Because users shouldn't trust anyway. It's "experimental" after all.

In BOTH cases, no large change should ever be submitted in the RC phase! I mean, in neither case is any time-consuming _process exception_ needed.

> I've been working in storage for going on 20 years at this point, and I've always been the one ultimately responsible for my code,...

That sounds like 20 years of filesystem experience and 0 years' experience of not being the boss?

Learning is hard, unlearning is much harder. Unlearning complete control seems crazy hard.

> things break down when people start dictating and taking an "I know better, even though I'm not explaining my reasoning" attitude.

Maintainers don't really have time to explain; the onus is on the submitter to make them understand and build trust. Whatever the perception is, using words like "dictating" can only backfire. Looks like it does. Maybe the submitter does not communicate well and should try harder. Maybe the maintainer is not smart enough or does not have enough time. Then the submitter should fork (and maybe come back later). Maybe both sides have issues.

So what exactly *is* in the cards, then?

Posted Sep 6, 2025 19:43 UTC (Sat) by koverstreet (✭ supporter ✭, #4296) [Link] (2 responses)

> Surprising, can you please share some commit IDs?

Try git log v6.16-rc1..v6.16 -- fs/xfs

> The most important points seem to be missing from that list: size and nature of the changes. For both risk and maintainer bandwidth reasons.

> If a "critical bug fix" has a non-negligible risk of regression, then either there's a clear divergence on the definition of a "critical bug fix", or the whole feature should be temporarily disabled (cause it has no bug fix simple enough for an RC phase). Or just filed and advertised, e.g. "don't use version X".

There was ~0 risk of regression with the patch in question.

bcachefs's journalling is drastically simpler than ext4's: we journal btree updates and nothing else - it's just a list of keys. For normal journal replay, we just sort all the keys in the journal and keep the newest when there's a duplicate. For journal_rewind, all we do is tweak the sort function if it's a non-alloc leaf node key. (We can't rewind the interior node updates and we don't need to, which means alloc info will be inconsistent; that's fine, we just force a fsck).

IOW: algorithmically this is very simple stuff, which means it's very testable, and it's in one of the codepaths best covered by automated tests - and it's all behind a new option, so it has zero effect on existing operation. This is about as low as regression risk gets, and the new code has performed flawlessly every time we've used it.
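To make the shape of that concrete, here's a minimal sketch of the sort-and-dedup replay described above. It's illustrative only: the types and fields are invented, and the rewind is done as a pre-filter here where the real code tweaks the sort function - don't mistake it for the actual bcachefs implementation.

    // Sketch of "sort the journal keys, keep the newest duplicate".
    // Invented types/fields for illustration; not bcachefs's actual code.
    use std::cmp::Reverse;

    #[derive(Clone, Debug)]
    struct JournalKey {
        btree_id: u32,  // which btree the update targets
        pos: u64,       // position within that btree
        seq: u64,       // journal sequence number; larger = newer
        level: u32,     // 0 = leaf node
        is_alloc: bool, // alloc-info updates can't be rewound
    }

    /// Normal replay: sort by (btree, pos) with newest seq first, then
    /// keep only the first entry at each position - the newest version.
    fn replay_keys(mut keys: Vec<JournalKey>) -> Vec<JournalKey> {
        keys.sort_by_key(|k| (k.btree_id, k.pos, Reverse(k.seq)));
        keys.dedup_by(|next, kept| {
            next.btree_id == kept.btree_id && next.pos == kept.pos
        });
        keys
    }

    /// Rewind: drop non-alloc leaf updates newer than `rewind_seq` so an
    /// older version wins the dedup. Interior-node and alloc updates are
    /// untouched, so alloc info ends up inconsistent - hence the forced
    /// fsck mentioned above.
    fn rewind_keys(mut keys: Vec<JournalKey>, rewind_seq: u64) -> Vec<JournalKey> {
        keys.retain(|k| !(k.level == 0 && !k.is_alloc && k.seq > rewind_seq));
        replay_keys(keys)
    }

    fn main() {
        let keys = vec![
            JournalKey { btree_id: 1, pos: 10, seq: 7, level: 0, is_alloc: false },
            JournalKey { btree_id: 1, pos: 10, seq: 9, level: 0, is_alloc: false },
        ];
        // Normal replay keeps seq 9; rewinding to seq 8 keeps seq 7.
        println!("{:?}", replay_keys(keys.clone()));
        println!("{:?}", rewind_keys(keys, 8));
    }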

> - Either a significant number of bcachefs people use Linus' mainline and trust it with their data. Then that branch is not really "experimental" any more (whatever the label says), and no large change should ever be submitted in the RC phase but only small, "critical bug fixes"

No, you've got it backwards. The experimental label is for communication to users, it's not for driving development policy.

We ALWAYS develop in the safest way we practically can, but we do have to balance that with shipping and getting it done. Getting too conservative about safety paralyzes the development process, and if we slow down to the point that we're not able to close out bugs users are hitting in a reasonable timeframe or ship features users need (an important consideration when e.g. we've got a lot of users waiting for erasure coding to land so they can get onto something more manageable, robust and better supported), then we're not doing it right.

OTOH, there's generally no need to split hairs over this, because if you're doing things right, good techniques for ensuring reliability and avoiding regressions are just no-brainers that let you both ship more reliable code and move faster: if you strike a good balance, most of the techniques you use are just plain win/win.

E.g. good automated testing is a _massive_ productivity boost; you find bugs quicker (hours instead of weeks) while the code is in your head. Investing in that is a total no brainer. Switching from C to Rust is another obvious win/win (and god I wish bcachefs was already written in Rust).

Work smarter, not harder.

But one of the key things we balance in "fast vs. safe" is regression risk, and that does vary over the lifecycle of a project. Early on, you do need to move quicker: you have lots of bugs to close out, and features that may require some rearchitecting, so accepting some risk of regression is totally fine and reasonable - as long as those regressions are minor and infrequent compared to the rest of the bugs you're closing out (you want the total bugcount to be going down fast), and you're not creating problems down the road for yourself or your users. Users will be fine with that as long as you're quickly closing out the actual issues they hit.

I eyeball the ratio of regression fixes to other bugfixes (as well as time spent) to track this; suffice it to say regressions have not generally been a problem. (The two big ones that bit us in the 6.16 cycle were pretty exceptional and caused partly by factors outside of our control, and both were addressed on multiple levels - new hardening, new tests - to ensure bugs like that don't happen again.)

The other key thing you're missing is: it's a filesystem, and people test filesystems by using them and putting their data on them.

It is _critical_ that we get lots of real world testing before lifting the experimental label, and that means people are going to be using it and trusting it like any other filesystem, and that means we have to be supporting it like any other filesystem. "No big changes" is far too simple a rule to ever work - experimental or not. Like I said earlier, you're always balancing regression risk vs. how much users need it, with the goal being ensuring that users have working machines.

There's also the minor but important detail that lots of users are using bcachefs explicitly because they've been burned by another COW filesystem that will go unnamed (as in, losing entire filesystems multiple times); even at this still slightly rough and early stage, the things they have to put up with are way better than losing more filesystems.

That is, they're using bcachefs precisely because of things like this: when something breaks, I make sure it gets fixed and they get their data back. Ensuring users do not lose data is always the top priority. It's exactly the same as the kernel's rule about "do not break userspace". The kernel's only function is to run all the other applications that users actually want to run: if we're breaking them, we're failing at our core function. A filesystem that loses data is failing at its core function, and should be discarded for something better.

> That sounds like 20 years of filesystem experience and 0 years' experience of not being the boss?

Well, if everything comes down to authority and chains of command now then maybe kernel culture is too far gone for filesystem work to be done here. Because that's not good engineering: good engineering requires an inquisitive, open culture where we listen and defer to the experts in their field, where we all learn from and teach each other, and when there's a conflict or disagreement we hash it out and figure out what the right answer is based on our shared goals (i.e. delivering working code).

> Maintainers don't really have time to explain;

That's a poor excuse for "I don't have time to be a good manager/engineer".

In engineering, we always have to be able to justify our decisionmaking. I have to be able to explain what I'm doing and why to my users, or they won't trust my code. I have to be able to explain what I'm doing and why to the developers I work with on the bcachefs codebase, or they'll never learn how things work - plus, I do make mistakes, and if you can't explain your reasoning that's a very big clue that you might be mistaken.

So what exactly *is* in the cards, then?

Posted Sep 7, 2025 3:04 UTC (Sun) by marcH (subscriber, #57642) [Link] (1 responses)

> Try git log v6.16-rc1..v6.16 -- fs/xfs

Please be specific; I just did, and I found nothing shocking. The commits with "refactor" or "factor" in their name seemed very trivial; even I could make sense of them.

> There was ~0 risk of regression with the patch in question.

I was speaking in general, not about any particular patch in question. I don't even know which patch you're referring to.

> No, you've got it backwards. The experimental label is for communication to users, it's not for driving development policy.

I think you missed the point I was trying to make. I'm not sure you really tried.

> But one of the key things we balance in "fast vs. safe" is regression risk, and that does vary over the lifecycle of a project.

Yet another wall of text full of things that make sense and that I tend to agree with, but I really can't relate much of it to the points I was trying to make. This is not communicating, just speaking. And I'm amazed you have any time left to write code after digressing and repeating yourself so much in obscure corners like this one. Indeed, burnout must not be far away. Unless there's a lot of copy/paste?

> where we all learn from and teach each other,

I have not read everything, very far from it, but I don't remember you "learning" much. Could you name one significant and non-technical thing that you've learned during all this drama and will try to do differently going forward? Trying to be absurdly clear: an answer to such a question (if any) should not say _anything_ about others, only about yourself.

So what exactly *is* in the cards, then?

Posted Sep 11, 2025 1:06 UTC (Thu) by deepfire (guest, #26138) [Link]

One person speaks about technical details and impersonal principles of communication and organisation.

The other goes as far as employing mind reading and generally positions themselves as a judge of character.

Someone clearly needs to get off the high horse.

So what exactly *is* in the cards, then?

Posted Sep 15, 2025 11:12 UTC (Mon) by paulj (subscriber, #341) [Link]

> The real breakdown was in the private maintainer thread, when Linus had quite a bit to say about how he doesn't trust my judgement based on, as far as I can tell, not much more than the speed with which I work and get stuff out. That speed is a direct result of very good QA (including the best automated testing of any filesystem in the kernel), a modern and very hardened codebase, and the simple fact that I know my code like the back of my hand and am very good at what I do.

Kent, do you realise the implicit message you are sending to other kernel people when you write things like this? You are somewhat implicitly saying that the kernel development process is generally much slower than your process because others do not have good code, don't have good testing, and don't know their code well.

I am sure that's not how you intend it, but this is the kind of message you send to others when you blow your own trumpet in such ways in comms to peers and to longer-standing kernel people - whether you are explicit or subtle about it. You signal that you consider yourself superior in such descriptions, and ALSO implicitly in how you argue for exceptions again and again - even when maintainers with the final say have told you that you will not get an exception at this time - particularly when you then point at other exceptional cases that you think you are better than.

Can you understand how this might rub others up the wrong way? Have you ever had to work with someone who regularly, through whatever implicit signals, makes it clear they think they are superior? Do you know how off-putting that can be to others?

I beseech you, yet again, to take a long break from engaging in comment threads here on LWN, or on Phoronix, or Reddit, etc., and also take a break from engaging with other kernel devs, and just go and focus on your code and making it great for your users. Refrain from making comparisons to other developers or their code or engineering practices - in any way, however subtle.

Do that, make bcachefs undisputably awesome, let your code do the talking, and things will eventually come good again.

If you can't stay off comment threads, where you seem to - regularly or irregularly - drop misjudged clangers about how good you think you are, then I fear the chances of things coming good aren't as good.

