
The same old arguments...

Posted Dec 7, 2025 4:22 UTC (Sun) by mirabilos (subscriber, #84359)
In reply to: The same old arguments... by Heretic_Blacksheep
Parent article: Eventual Rust in CPython

> There shouldn't be a handful of users on niche OS/hardware combos holding back a
> project that's used by tens of millions.

But they should!

This is how we get hobbyist OSes… like Minix and then Linux.

This is how we get hobbyist architectures, too.

And this is where the great enrichment of FOSS is, not in corporate “Enterprise” Linux.

With Rust… it begins with LLVM: the project that wanted $300/month so they could run CI instances for a mere fork of FreeBSD, which at that time wasn’t even all that different. And then Rust cannot even use LLVM proper, only its own patched version.

In Debian, we have a Policy against that. Which is, of course, ignored for where this money is.

Then, bootstrapping, then navigating the entire ecosystem (including the cargo LPM, which is a plethora of problems in itself)…

… and for what? For a language that doesn’t even support dynamic linking?



The same old arguments...

Posted Dec 7, 2025 4:58 UTC (Sun) by josh (subscriber, #17465) [Link] (13 responses)

> But they should!

> This is how we get hobbyist OSes… like Minix and then Linux.

> This is how we get hobbyist architectures, too.

No, it's not. We get hobbyist OSes (and other hobbyist projects) because the people working on them put in the work to make them happen, not because they can press other developers into service to keep things working on a target those developers were not seeking to support.

Getting others to commit to *keep your OS or architecture working for you* is a very, very big ask. The correct answer in almost all cases is "no, the developers of that OS or architecture need to do all the work to maintain it".

> And then Rust cannot even use LLVM proper, only its own patched version.

This is misinformation. Rust builds just fine with standard LLVM. The only patches it carries in its branch of LLVM are the same kinds of bugfixes and backports that other users of LLVM carry.

> For a language that doesn’t even support dynamic linking?

C++ generics don't support dynamic linking either, and the only reason C++ is vaguely considered to "support" dynamic linking is that interfaces pointedly avoid generics. Even then, it's been through a few rounds of ABI breakage. That's not something we're looking to put our users through, for a feature that most people don't require. It's a useful feature, by all means, and I expect that we'll support it eventually, using a model similar to what Swift did. But it's not a dealbreaker, and it's never likely to be the default; rather, it's something that extends the set of things supported over a library ABI boundary beyond what C supports.

The same old arguments...

Posted Dec 7, 2025 7:36 UTC (Sun) by mb (subscriber, #50428) [Link]

> > For a language that doesn’t even support dynamic linking?

>C++ generics don't support dynamic linking either

And Rust does support dynamic linking.

https://doc.rust-lang.org/reference/linkage.html

It's just that Rust crates typically won't make the C++ trade-off needed to make that happen.
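To make that trade-off concrete: a crate built as a `cdylib` gets a stable shared-library boundary precisely by restricting its exported surface to the C ABI, with no generics crossing it. A minimal sketch (the function name is invented for illustration; building the actual shared library requires `crate-type = ["cdylib"]` in Cargo.toml):

```rust
// A function exported from a `cdylib` must stick to the C ABI:
// no generics, no Rust-specific types in the signature.
// (Hypothetical example, not from any real crate.)
#[no_mangle]
pub extern "C" fn add_i32(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too; across the .so boundary a C caller
    // would declare it as: int32_t add_i32(int32_t, int32_t);
    assert_eq!(add_i32(2, 3), 5);
}
```

The `dylib` crate type, by contrast, exposes the full Rust ABI, which is not stable between compiler versions.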

The same old arguments...

Posted Dec 7, 2025 8:09 UTC (Sun) by ssmith32 (subscriber, #72404) [Link]

Even more to the point: we got Linux because a big OS with many users _did not_ support every use case.

In fact, I would say the lack of support for niche platforms encourages hobby projects.

The same old arguments...

Posted Dec 7, 2025 9:39 UTC (Sun) by Sesse (subscriber, #53779) [Link] (5 responses)

> C++ generics don't support dynamic linking either

This is only true for a pretty narrow definition of “support”. It is true that _someone_ has to monomorphize the generic before it can be linked (since the only real alternatives, AFAIK, are a VM or type erasure); but that's true whether we're talking static or dynamic linking. Lots of C++ dynamic libraries expose functions involving these specializations; e.g., std::string is a generic (std::basic_string&lt;char&gt;) and libstdc++.so exposes a lot of functions related to it. In practice, you can upgrade libstdc++ pretty freely without anything related to vector, string, map, etc. breaking—but you can't easily change their internals, since they have become effectively part of the ABI (like so many other things in C).

The same old arguments...

Posted Dec 7, 2025 18:43 UTC (Sun) by JoeBuck (guest, #2330) [Link] (4 responses)

Right; I expect that at some point the most commonly used generics in the standard library will have their implementations frozen enough so that a stable ABI can be produced and dynamic linking can be supported in Rust in more cases. This was done for C++ long ago.

The same old arguments...

Posted Dec 7, 2025 18:58 UTC (Sun) by josh (subscriber, #17465) [Link] (3 responses)

I doubt we'll ever stabilize the internal layout of anything more complicated than `Result` or `Option` (and even for those, doing so means giving up on any further opportunities for niche optimization, the mechanism by which a type like `Option<&T>` is the same size as `&T`).
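The niche optimization in question is easy to observe from safe code; a quick standalone sketch (not from the thread):

```rust
use std::mem::size_of;

fn main() {
    // The all-zero bit pattern is invalid for &T, so Option<&T>
    // can reuse it to encode None: no separate discriminant.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // Without a niche, the discriminant costs space (plus padding).
    assert!(size_of::<Option<u64>>() > size_of::<u64>());
}
```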

For something like `Vec` or `HashMap`, the most likely path to stabilization is an opaque pointer plus a vtable of methods.
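A rough sketch of what an opaque-pointer-plus-vtable ABI could look like; this is entirely hypothetical (none of these names exist in the standard library), with a `HashMap` standing in behind the opaque pointer:

```rust
use std::collections::HashMap;

// Hypothetical stable ABI: the caller holds only an opaque pointer
// plus a table of function pointers; the layout of the map itself
// never crosses the library boundary.
#[repr(C)]
struct MapVTable {
    len: unsafe extern "C" fn(*const ()) -> usize,
    free: unsafe extern "C" fn(*mut ()),
}

unsafe extern "C" fn map_len(p: *const ()) -> usize {
    unsafe { (*(p as *const HashMap<u32, u32>)).len() }
}

unsafe extern "C" fn map_free(p: *mut ()) {
    unsafe { drop(Box::from_raw(p as *mut HashMap<u32, u32>)) }
}

fn main() {
    let mut m: HashMap<u32, u32> = HashMap::new();
    m.insert(1, 2);
    // Hand out only the type-erased pointer and the vtable.
    let handle = Box::into_raw(Box::new(m)) as *mut ();
    let vtable = MapVTable { len: map_len, free: map_free };
    unsafe {
        assert_eq!((vtable.len)(handle), 1);
        (vtable.free)(handle);
    }
}
```

The cost is an indirect call per operation, which is what the follow-up comments push back on for hot paths.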

The same old arguments...

Posted Dec 7, 2025 23:40 UTC (Sun) by JoeBuck (guest, #2330) [Link] (2 responses)

That would be crazily inefficient for Vec&lt;i32&gt; or other vectors of primitive types. Even if the call is not inlined, the structure could be frozen, as it is for std::vector&lt;int&gt; in C++ when libstdc++ is in use. Likewise for string slice arguments.

The same old arguments...

Posted Dec 7, 2025 23:43 UTC (Sun) by josh (subscriber, #17465) [Link] (1 responses)

We could nail down slices easily enough. It might be reasonable to give *some* direct access to `Vec`, since the triple of pointer, length, and capacity is the obvious implementation; that would allow efficient and vectorized access to the data. However, reallocation, for instance, would likely still require a vtable call.
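That triple is visible in the current implementation; a small sketch (the three-word size holds on today's targets, but the field order is deliberately unspecified, which is part of why the ABI is not yet nailed down):

```rust
use std::mem::size_of;

fn main() {
    // Vec<T> is exactly three words today: pointer, length, capacity.
    assert_eq!(size_of::<Vec<i32>>(), 3 * size_of::<usize>());

    let mut v = Vec::with_capacity(8);
    v.extend_from_slice(&[1, 2, 3]);
    // The three components are observable through safe accessors.
    assert_eq!(v.len(), 3);
    assert!(v.capacity() >= 8);
    assert!(!v.as_ptr().is_null());
}
```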

The same old arguments...

Posted Dec 9, 2025 19:41 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

After careful consideration, I don't think Vec needs vtables for reallocation, because those vtables really should attach to Allocator instead of Vec. But we do need some ABI glue that currently does not exist:

* In the case of Vec<T> a/k/a Vec<T, Global>, Global is a well-known ZST that can be statically dispatched. No need for a vtable. This is the most common case, so ideally it should not be pessimized in order to support other cases (especially seeing as the global allocator can be replaced). The foreign code will need to call into Rust's global allocation routines, but you have to do that even in the (default) case where Global delegates to System, so that's unavoidable.
* In the case of Vec<T, A> where A is a ZST or never dropped, the ABI needs glue code to coerce the whole thing into Vec<T, &'static dyn Allocator>, and then the vtable logic lives in Allocator where it belongs. I'm assuming, of course, that we can also nail down the ABI of &dyn Trait, which is a whole other kettle of fish. But at least dyn Trait is explicitly designed to support dynamic dispatch - most of the technical choices have already been made.
* In the case of Vec<T, &'a A>, it's the same story but with a lifetime parameter. Not sure how well that translates over the ABI, but at least lifetimes add no extra gunk at runtime.
* In the general case, the Vec might own an allocator, which might not be a ZST. That coercion is more complicated because now the Vec itself is of unknown size (it directly contains the allocator's fields). I would be inclined to declare that as unsupported or at least out of scope for language-level support, in the interests of not overly pessimizing Vec<T, Global> to support a niche corner case. Probably it could still be supported at the library level by decoupling the ownership of the Allocator from the Vec, and instead passing Vec<T, &'a dyn Allocator>. But that conversion is complicated and unsafe, so maybe some kind of glue code would be helpful here as well. Or maybe this version of Vec really does need a vtable.
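On the `&dyn Trait` point: today it is a two-word fat pointer (data pointer plus vtable pointer), though the precise layout is likewise not a stable promise. A quick sketch:

```rust
use std::fmt::Debug;
use std::mem::size_of;

fn main() {
    // &dyn Trait = (data pointer, vtable pointer): two words,
    // versus one word for a plain reference. The vtable's own
    // layout is what a stable ABI would still need to nail down.
    assert_eq!(size_of::<&dyn Debug>(), 2 * size_of::<usize>());
    assert_eq!(size_of::<&u32>(), size_of::<usize>());

    // Dynamic dispatch goes through the vtable:
    let x: u32 = 7;
    let d: &dyn Debug = &x;
    assert_eq!(format!("{:?}", d), "7");
}
```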

The same old arguments...

Posted Dec 7, 2025 20:56 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (2 responses)

> We get hobbyist OSes (and other hobbyist projects) because the
> people working on them put in the work to make them happen

Yes. But that only works if the upstreams accept such work, don’t put too many hurdles in front of hobbyists or raise complexity, and don’t actively throw sticks and stones into their paths or extort “protection” money (money to merge the patches).

The same old arguments...

Posted Dec 7, 2025 23:45 UTC (Sun) by josh (subscriber, #17465) [Link]

Certainly there's no call to make it intentionally harder. But resources like CI systems and maintainer time do have a cost. Framing that as "protection money" is disingenuous.

The same old arguments...

Posted Dec 8, 2025 22:56 UTC (Mon) by intelfx (subscriber, #130118) [Link]

> Yes. But that only works if the upstreams accept such work, don’t put too many hurdles in front of hobbyists or raise complexity, and don’t actively throw sticks and stones into their paths or extort “protection” money (money to merge the patches).

As I'm sure you are aware, in FOSS, every project has the fundamental moral right to "self-determine": to decide what, if any, level of formal or informal support and guarantees it wishes to make.

You normally hear about this in the context of projects having the moral right to provide no guarantees and no support: the proverbial "as is". However, this works both ways. If a project, such as Rust, wishes to hold itself to a higher standard — such as requiring all code to pass CI before declaring a target supported — **YOU CANNOT STOP THEM FROM DOING SO**.

Calling this "extorting protection money" is so disingenuous and hostile that you should honestly be ashamed of saying that.

The same old arguments...

Posted Dec 12, 2025 18:25 UTC (Fri) by anton (subscriber, #25547) [Link] (1 responses)

> Getting others to commit to *keep your OS or architecture working for you* is a very, very big ask.

As someone who wants to keep the software I maintain working on minority hardware and software, I certainly won't use Rust in its current state for it. And reading some of the opinions expressed in the present discussion makes me happy that our project does not depend on CPython.

The same old arguments...

Posted Dec 12, 2025 20:22 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

Thanks, that’s a statement that’s nice to read.

----

I’m also always surprised ⓐ at how people go from “please merge my patches and just try to not actively break my arch” to “forcing others to commit to keep your OS or architecture working for you”, and ⓑ at why that would even be onerous.

I mean, I’m not in the habit of writing shitty code that fails on other CPUs or (unixoid, mostly, as the things I write tend to target unixoid) OSes.

The same old arguments...

Posted Dec 7, 2025 7:04 UTC (Sun) by interalia (subscriber, #26615) [Link] (6 responses)

> But they should!

> This is how we get hobbyist OSes… like Minix and then Linux.

But how was the creation of Minix and Linux caused by users on niche OS/hardware holding back a large project? What large project are we talking about? I don't think the big commercial Unixes felt constrained by x86, nor did they create Minix/Linux.

It doesn't seem to me that either of them was written in order to support other people who were users of niche hardware. They were written by those niche hardware users themselves, which would also be the proposed solution if projects like Python, apt, or Linux decide to drop an architecture.

The same old arguments...

Posted Dec 7, 2025 20:57 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (5 responses)

I argue that if “a large project” is “held back” by support for more architectures/systems/targets, then it’s both an unportable and a shitty project and definitely NOT something that should become a cornerstone of FOSS.

The same old arguments...

Posted Dec 8, 2025 8:45 UTC (Mon) by taladar (subscriber, #68407) [Link] (4 responses)

You have that the wrong way around. The "unportable and shitty" bit is the old hardware, that is literally why nobody builds or wants to support it anymore, because it fundamentally does something in a way that we figured out was a bad idea or at the very least different from everyone else for no good reason.

The same old arguments...

Posted Dec 8, 2025 8:52 UTC (Mon) by Wol (subscriber, #4433) [Link] (3 responses)

I think you've got it the wrong way round.

All too often the majority modern way is the WRONG way, but unfortunately it won the race to the bottom.

I can't speak for x86_64, but the 68000? the 32032? MUCH better chips, much better designed, they just couldn't make headway against the 80x86 line ...

Cheers,
Wol

The same old arguments...

Posted Dec 8, 2025 9:54 UTC (Mon) by anselm (subscriber, #2796) [Link] (2 responses)

> I can't speak for x86_64, but the 68000? the 32032? MUCH better chips, much better designed, they just couldn't make headway against the 80x86 line ...

The story goes that the reason why IBM used the 8088 for the PC rather than the 68000 (which had already been available at the time) is that they didn't want PCs to be too powerful because they might have cannibalised sales of their minicomputer lines. A similar argument later kept IBM from introducing 80386-based PCs but then Compaq came out with one and the floodgates were open.

As far as the 68000 was concerned, it was certainly not for lack of trying on the part of the industry. At the time, various 68000-based computers like the Atari ST and Commodore Amiga were quite popular with home users but never made noticeable inroads in the business PC world (which was probably less to do with the technical merit of the platform(s) and more with terrible marketing and unwise product development decisions by their manufacturers). And of course the original Macintosh was 68000-based but the platform switched over to PowerPC and eventually x86 (and ARM) – much like early SUN-type Unix workstations were built around 680x0 chips before CISC fell out of fashion and the workstation makers all came up with their own RISC CPUs (SPARC, HPPA, …).

The same old arguments...

Posted Dec 8, 2025 10:31 UTC (Mon) by farnz (subscriber, #17727) [Link] (1 responses)

There's other parts to that story; the 68k had a 16 bit external data bus, where the 8088 had a mere 8 bit bus. This meant that the PC was a cheaper design, since it could reuse long-established 8 bit parts (and, indeed, if you look at the chips used in the IBM Personal Computer 5150 and the IBM System/23 Datamaster 5322 or 5324, you see a lot of overlap).

And, of course, the 32032 was a disaster zone of a chip. On paper, it was reasonable, but once you took the errata lists into account, it was awful, and you were better off with the 68000.

The same old arguments...

Posted Dec 9, 2025 11:27 UTC (Tue) by epa (subscriber, #39769) [Link]

Ah yes, and the 68008 (which also had an 8-bit data bus and could have been used to build a cheap m68k-based machine) didn't come out until 1982, too late for the IBM PC.

The same old arguments...

Posted Dec 7, 2025 9:28 UTC (Sun) by qyliss (subscriber, #131684) [Link]

> And then Rust cannot even use LLVM proper, only its own patched version.

This has not been true for years.

The same old arguments...

Posted Dec 7, 2025 10:10 UTC (Sun) by MortenSickel (subscriber, #3238) [Link] (1 responses)

>> There shouldn't be a handful of users on niche OS/hardware combos holding back a
>> project that's used by tens of millions.

>But they should!
>This is how we get hobbyist architectures, too.

No. Linux was initially written for the 80386, which was pretty far from hobbyist hardware in the early '90s. As a student back then, I had an 8086 myself, dreaming of being able to buy an 80286. I had access to 80386 PCs at the university (as well as a few other professional systems). Many of the hobbyist architectures today are former professional architectures. (When I could take over and take home the HP-UX workstation I had used at work around 2000, that felt pretty cool, and I was looking into installing Linux on it, but it turned out that I had other things to do with my life, so it ended up as electronic waste.)

The same old arguments...

Posted Dec 7, 2025 21:02 UTC (Sun) by mirabilos (subscriber, #84359) [Link]

It was hobbyist back then compared to the other Unix workstations. It was “the cheap PC”.

That it was still expensive in Europe doesn’t detract from the relative cheapness and hobbyist-ness.

I also only had an 8088 back in 1991. But the 80386, and systems with it, had already been on the (American, I guess) market for years (since 1985 and 1986, respectively). And you kinda need one for Unix; the 80286 and below don’t have the infrastructure to support it easily. The m68k series, also a favourite of hobbyists at that time, did, so it was pure chance that Torvalds did Intel first.

The same old arguments...

Posted Dec 8, 2025 1:29 UTC (Mon) by dvdeug (subscriber, #10998) [Link] (15 responses)

> This is how we get hobbyist OSes… like Minix and then Linux.

Like OSes built for the most mass-market CPU at the time? There are a lot of hobbyist OSes out there; Rosco is a new OS/system for the M68K. Its audience is fans of retrocomputing and the M68K, and it's not going to hit like Linux or Minix.

Linux on most of these architectures, besides x86 or ARM, has always been a rare usage. Even the high-end versions are now weaker than any computer on the market, with the exception of S390.

> And these is where the great enrichment of FOSS is

In the handful of people who still have archaic hardware and are installing new operating systems on it? I'd rather bet on the hundreds of millions of new people who are playing around with their first computer and might be convinced to become FOSS programmers, who could lead FOSS for the next forty years, than on the thousands who want to run Linux on their ancient computers instead of doing anything forward-looking.

Yes, we should work with people who want to do what they want on Linux. But hurting the mainstream to support a small minority is not a win, whether you consider popular support or the enrichment of FOSS.

The same old arguments...

Posted Dec 8, 2025 1:46 UTC (Mon) by mirabilos (subscriber, #84359) [Link] (14 responses)

A society is not measured in how it treats the masses; rather, it is measured in how it treats its minorities.

And lots of great things do eventually come from minorities.

I am also thinking of self-hosted systems, not those cross-compiled from a majority system.

The same old arguments...

Posted Dec 8, 2025 8:49 UTC (Mon) by taladar (subscriber, #68407) [Link] (12 responses)

You are not a persecuted minority because other people don't want to invest any more effort into supporting your niche hobby hardware in their mainstream software code bases.

The same old arguments...

Posted Dec 8, 2025 11:19 UTC (Mon) by moltonel (subscriber, #45207) [Link] (11 responses)

Don't build a strawman; I didn't see anybody talking about persecution. The "how a society treats its minorities" insight is not just about treating minorities equally, but about how much community help they receive: for example, how many places are made wheelchair-accessible.

Likewise, mainstream community projects are generally willing to do a bit of extra work for niche archs, but the cut-off point for "you're on your own beyond that point" is fuzzy, subjective, and worth debating. Michał Górny's comment in the original thread is pretty clear-thinking: asking for understanding/flexibility/help, but acknowledging that mainstream can't wait forever.

It does look like Rust support work for/by some niche archs got invigorated, partly thanks to this Python discussion. That's a good thing for everybody.

The same old arguments...

Posted Dec 8, 2025 13:37 UTC (Mon) by pizza (subscriber, #46) [Link] (10 responses)

> I didn't see anybody talking about persecution. The "how a society treats its minorities" insight is not just about treating minorities equally, but about how much community help they receive. For example how many places are made wheelchair-accessible.

First, "community help" is funded by taxes, and "wheelchair-accessible" places are made so because they are forced to do so as a condition of running a public business, not out of the goodness of their hearts. I might add that that accessibility directly results in higher prices for everyone else.

> Likewise, mainstream community projects are generally willing to do a bit of extra work for niche archs,

Generally, that "extra work" is "patches welcome" and increasingly, "supply testing/CI resources that can be relied upon".

The same old arguments...

Posted Dec 8, 2025 16:30 UTC (Mon) by moltonel (subscriber, #45207) [Link] (9 responses)

> First, "community help" is funded by taxes, and "wheelchair-accessible" places are made so because they are forced to do so as a condition of running a public business, not out of the goodness of their hearts. I might add that that accessibility directly results in higher prices for everyone else.

These facts sound like a rebuttal, but I'm not sure of what? Yes, some (not all) community help is tax-funded and/or legally mandated, and has a price for the overall community. And yet we still do it: we pass those laws, spend that money, and encourage those volunteers. Why? Because we collectively decided that it was a good thing to do, whether for ethical or practical reasons. Societies keep adjusting how far community help can/should go, but there's a strong correlation between a healthy community and a helpful one.

> Generally, that "extra work" is "patches welcome" and increasingly, "supply testing/CI resources that can be relied upon".

Yes, though even "patches welcome" is not free: it costs reviewer time, ongoing maintenance, implicit commitment, etc. Every project is different: some don't accept patches, some will spend a lot of resources to help a single user.

In CPython's case, the argument is that remaining C-only has an ongoing cost, paid by the project to help minority platforms. That balance has shifted over time: 10 years ago, missing platform support was seen as Rust's problem and could legitimately prevent Rust adoption. Today more and more, it is seen as that platform's problem and dropping support has become the lesser evil.

The same old arguments...

Posted Dec 8, 2025 18:18 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> Yes, though even "patches welcome" is not free: it costs reviewer time, ongoing maintenance, implicit commitment, etc. Every project is different: some don't accept patches, some will spend a lot of resources to help a single user.

Agreed. One project I'm on is a "patches welcome" project for platforms we don't actively support (including WSL, MinGW, Cygwin, FreeBSD, etc.). We will review patches (as time affords), but we cannot guarantee that things won't break without contributed CI resources (and even then, they'll usually be "notification" as we don't have control over the machine(s) and cannot block paying customers on such open-ended things as "CI machine over there is down").

The same old arguments...

Posted Dec 8, 2025 19:51 UTC (Mon) by pizza (subscriber, #46) [Link] (7 responses)

> Societies keep adjusting how far community help can/should go, but there's a strong correlation between a healthy community and a helpful one.

Sure. But this is where calling a "loose collection of software developers working on a project on an ad-hoc volunteer basis, plus a vastly larger number of non-contributing users" a "community" breaks down.

In a "real" society/community, everyone has to explicitly opt in (if only by virtue of not leaving) and, once in, has to continually pay (or otherwise contribute) on an ongoing basis (=~taxes) to receive those benefits. Non-compliance with those rules has real penalties that are ultimately enforced by, well, literal force.

...A society/community cannot function with zero or purely one-sided obligations.

The same old arguments...

Posted Dec 8, 2025 20:20 UTC (Mon) by moltonel (subscriber, #45207) [Link] (6 responses)

You're reading too much into this simile, how the rules are(n't) enforced or where resources come from is beside the point. AFAIU, mirabilos's point is just that it's generally a good thing for groups to spend some resources helping weaker members. This applies at every level of human societies. Always within reason: the group won't help beyond its means, or if there really is no expected return.

The same old arguments...

Posted Dec 8, 2025 23:02 UTC (Mon) by pizza (subscriber, #46) [Link]

> Always within reason: the group won't help beyond its means,

It's all well and good to say 'groups should spend some resources helping weaker members', but the fundamental point here remains the simple fact that there are [nearly always] [vastly] fewer available resources than demands placed upon them.

The same old arguments...

Posted Dec 9, 2025 1:23 UTC (Tue) by dvdeug (subscriber, #10998) [Link] (4 responses)

I don't see people running antique hardware as weaker members. They're people who run Arm or x86-64 for normal usage, and work on other systems because it's fun. They likely have more Arm/x86 computing power sitting around than the average user.

The same old arguments...

Posted Dec 9, 2025 7:42 UTC (Tue) by mirabilos (subscriber, #84359) [Link] (3 responses)

Really not.

I use a Thinkpad X61 as my daily driver for Linux. That’s a Core 2 Duo from 2007.

I use a Thinkpad X40 as my daily driver for BSD. That’s a Pentium M from 2004. I can do everything I need except Firefox and MuseScore on it.

I do have one Raspberry Pi 1… because I got it as a gift.

My home server is a pre-Spectre/Meltdown Pentium 233 MMX.

I use a “dumbphone” for telephoning… I also have an old smartphone, but mostly for GPS for geocaching and the likes.

You significantly overestimate what people need to run to have a good experience.

The same old arguments...

Posted Dec 9, 2025 8:03 UTC (Tue) by mjg59 (subscriber, #23239) [Link] (2 responses)

So you have a 64-bit x86 system that supports up to 8GB of RAM and is likely faster than any commercial RISC system that can be run without a ludicrous electricity bill. You don't *need* any alternative architectures - and I have enough junk under my desk that if that's the blocker on you running weird old stuff then I'll happily drag some over to Europe when I'm there next week and post them to you, and you can't even argue about it being a waste of hardware because right now I have several old laptops that are doing nothing.

I say this as someone still actively poking at Linux driver support for the Commodore CDTV, and trying to get Zorro III working under Amiga Unix. These are things I find fun to do. I would never ask anyone else to care in the slightest.

The same old arguments...

Posted Dec 9, 2025 9:05 UTC (Tue) by mirabilos (subscriber, #84359) [Link] (1 responses)

> So you have a 64-bit x86 system that supports up to 8GB of RAM and is likely

Yes, and people are calling it legacy and are wanting to remove support for it already.

It’s ridiculous, isn’t it?

The same old arguments...

Posted Dec 9, 2025 9:11 UTC (Tue) by mjg59 (subscriber, #23239) [Link]

In this context? No, Rust compiled code is going to be Just Fine on a Core 2 Duo.

The same old arguments...

Posted Dec 8, 2025 11:36 UTC (Mon) by dvdeug (subscriber, #10998) [Link]

> A society is not measured in how it treats the masses; rather, it is measured in how it treats its minorities.

Err, no, societies that have a minority in the lap of luxury on the backs of masses living in squalor don't get rated very high.

In this case, I feel like people who have Alphas and M68K and the rest of the hardware in question tend to be the technological elite, who have a modern computer to do their work on, and have already decided whether or not to be a part of the FOSS community. It's the young kids who we need to keep the community running for another 40 years.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds