
Woodruff: Weird architectures weren't supported to begin with

William Woodruff has posted a rant of sorts on the adoption of Rust by the Python Cryptography project, which was covered here in February.

What’s the point of this spiel? It’s precisely what happened to pyca/cryptography: nobody asked them whether it was a good idea to try to run their code on HPPA, much less System/390; some packagers just went ahead and did it, and are frustrated that it no longer works. People just assumed that it would, because there is still a norm that everything flows from C, and that any host with a halfway-functional C compiler should have the entire open source ecosystem at its disposal.



Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 16:27 UTC (Mon) by dxin (guest, #136611) [Link]

Who's gonna help the helpless
Sometimes we're all alone

So layering is no longer a thing?

Posted Mar 1, 2021 17:02 UTC (Mon) by dmoulding (subscriber, #95171) [Link] (16 responses)

Some of the assertions made border on absurdity.

If the coreutils authors were to say, "What do you mean you ported 'rm' to the Raspberry Pi <or insert any random platform here>? Did anyone tell you it would be okay to do that? Don't you know we can't test 'rm' on every possible niche architecture out there?"

There is a concept of layering in this industry. The assumption is that if you build your software on top of a given set of well-defined layers, your program will reliably work on any system, so long as those underlying layers have been made to work on those systems. It's a keystone of software engineering.

Many people built their software on top of Python believing that Python would be such a layer they could depend on. That included an ecosystem of Python libraries that make life as an application developer easier. If suddenly one day, part of Python itself were implemented in COBOL and applications that depend on Python stopped working with new Python releases, unless they ran on systems that have a fully functional COBOL toolchain, is that reasonable? Can the application developers be blamed because they made bad assumptions that they were depending on a layer that's not as portable as they thought it would be?

It's not a bad assumption, it's reliance on the keystone of software engineering that if everyone at every layer of the system adheres to well-established interfaces, we get portable software, and that benefits everyone.

Disclaimer: I have zero skin in this particular game because I don't generally use the libraries or languages at issue.

So layering is no longer a thing?

Posted Mar 1, 2021 17:18 UTC (Mon) by nix (subscriber, #2304) [Link] (1 responses)

> There is a concept of layering in this industry. The assumption is that if you build your software on top of a given set of well-defined layers, your program will reliably work on any system, so long as those underlying layers have been made to work on those systems. It's a keystone of software engineering.
Yes... except that there are some things that simply are architecture-dependent, and compilers that emit native code (like LLVM) are always going to be one of those things and will always require work to keep working on obscure architectures. C and C++ also come with extra fun because it's very easy to get your program dependent on things like the endianness of machine words, the size of pointers or the signedness of char, and now you have to routinely test on crazy platforms like mingw64... and the testing matrix explodes even more. This makes testing all combinations routinely impossible: all you can do is exhaustively test on a few platforms and do most-common-case testing on the rest, and hope (and perhaps occasionally do bigger test runs on one or another obscure platform).

The stuff I work on these days (libctf) is minimally endianness-dependent (it writes in native endianness and flips endianness at file open time) and it stores a lot of integers in pointers, so is dependent on integer and pointer sizes (because that's how you store integers without huge amounts of extra memory allocation in the libiberty hashtab)... and these two things alone multiply the test matrix something like 16-fold, though some of them can be covered via cross-compilers. Needless to say a full test (including tests of things like whether the i18n machinery has suddenly broken, shared-versus-static, all-targets versus no-targets, tests of all intermediate commits, and tests of make check on all available targets, compared against the start and end of one patch series, and then compilations of a nice big pile of other packages searching for surprising linker warnings) is not something I'm going to be doing on all 16 of those test platforms, and not only because a lot of those other packages won't even *build* in half of those 16-odd test environments (like, say, mingw64, no you cannot build glibc there). I do not expect the set of architecture-dependent things to reduce over time: in fact it will probably grow a bit as we try to do more.

So even in relatively simple projects more or less suitable for maintenance by a one-man band there is always going to be a tradeoff between test thoroughness on obscure architectures and the ability to actually get anything other than endless tests done at all. For bigger projects, testing on obscure platforms as thoroughly as you test on mainline ones is simply impossible.
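
As a minimal, hypothetical sketch of the "write in native endianness, byte-swap at file open time" approach described above (the magic value and header layout here are invented for illustration and are not libctf's actual format):

```rust
use std::convert::TryInto;

const MAGIC: u32 = 0xC1F0_0001; // hypothetical magic value

// Interpret a 4-byte field, byte-swapping if the file was written on a
// machine of the opposite endianness.
fn read_u32(bytes: [u8; 4], needs_swap: bool) -> u32 {
    let v = u32::from_ne_bytes(bytes);
    if needs_swap { v.swap_bytes() } else { v }
}

// "Open time": decide once whether the magic matches natively or only after
// swapping, then apply that decision to every field read afterwards.
fn open_header(raw: &[u8; 8]) -> Option<(u32, u32)> {
    let magic = u32::from_ne_bytes(raw[0..4].try_into().ok()?);
    let needs_swap = if magic == MAGIC {
        false
    } else if magic.swap_bytes() == MAGIC {
        true
    } else {
        return None; // not this format at all
    };
    let version = read_u32(raw[4..8].try_into().ok()?, needs_swap);
    Some((MAGIC, version))
}

fn main() {
    // A header "written" in big-endian order, readable on a host of either
    // endianness without changing the reader code.
    let mut raw = [0u8; 8];
    raw[0..4].copy_from_slice(&MAGIC.to_be_bytes());
    raw[4..8].copy_from_slice(&7u32.to_be_bytes());
    assert_eq!(open_header(&raw), Some((MAGIC, 7)));
}
```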

So layering is no longer a thing?

Posted Mar 16, 2021 18:03 UTC (Tue) by immibis (subscriber, #105511) [Link]

Really, all that is needed here is a backend for LLVM that outputs C code, which you can then compile with your platform-specific C compiler of choice.

So layering is no longer a thing?

Posted Mar 1, 2021 17:37 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (11 responses)

> If the coreutils authors were to say, "What do you mean you ported 'rm' to the Raspberry Pi <or insert any random platform here>? Did anyone tell you it would be okay to do that? Don't you know we can't test 'rm' on every possible niche architecture out there?"

If they said that, specifically in response to a bug report of the form "rm doesn't work on the Raspberry Pi," and said bug was not reproducible in any of their target environments, then IMHO that (followed by a WONTFIX) would be a perfectly reasonable response. Of course, they might choose to help the person anyway, but I see no reason such support (in a bug tracker, not on a mailing list etc.) should be considered obligatory.

> Many people built their software on top of Python believing that Python would be such a layer they could depend on. That included an ecosystem of Python libraries that make life as an application developer easier.

So what? All OSS software, of any kind, is "NO WARRANTY," remember? If you're not paying for it, then the maintainers are doing you a favor by providing you with bug fixes etc. They are under no obligation to continue doing that favor just because it happens to be convenient to you.

> If suddenly one day, part of Python itself were implemented in COBOL and applications that depend on Python stopped working with new Python releases, unless they ran on systems that have a fully functional COBOL toolchain, is that reasonable? Can the application developers be blamed because they made bad assumptions that they were depending on a layer that's not as portable as they thought it would be?

We already went through a smaller-scale version of this problem. It's called Python 3.x. Lots of applications broke. People grumbled about it. Some people left the Python community entirely. If the core developers decided to do that again, it might kill off the platform altogether. But it's their platform to kill, if they choose to do so.

Yes, you can fork, if you really want to. That also applies to the cryptography library we're ostensibly discussing. Nobody has forked it (to my knowledge) because nobody is willing to put in the effort required to continue supporting these platforms. If nobody is willing to do something, then that thing will not get done. It is not reasonable to simply point at whoever was doing it last and say "now that person needs to continue doing that thing until the end of time."

So layering is no longer a thing?

Posted Mar 1, 2021 18:17 UTC (Mon) by eery (guest, #142325) [Link]

> So what? All OSS software, of any kind, is "NO WARRANTY," remember? If you're not paying for it, then the maintainers are doing you a favor by providing you with bug fixes etc. They are under no obligation to continue doing that favor just because it happens to be convenient to you.
> Yes, you can fork, if you really want to. That also applies to the cryptography library we're ostensibly discussing. Nobody has forked it (to my knowledge) because nobody is willing to put in the effort required to continue supporting these platforms. If nobody is willing to do something, then that thing will not get done. It is not reasonable to simply point at whoever was doing it last and say "now that person needs to continue doing that thing until the end of time."

It's true that no-one is under any sort of legal obligation or is legally liable for changes to OSS software -- the "contract" you agree to when using the software (whether GPL or BSD or whatever) always includes some sort of liability waiver for a reason. You use it at your own risk.

What makes this all so awkward is that while there's no legal obligation to do anything, and really there isn't even a social obligation, you end up with these projects that are near the bottom of dependency stacks for huge, huge software projects. And when they introduce some radical, upending change it's just a headache for upstream.

I don't really like what happened here, but I also agree that the developers have no real obligation to do whatever. Part of me wishes that instead of bolting on Rust to projects like this or librsvg, there would be a fork or new project introducing the Rust components, but again, I realize that if all the (unpaid) developers want to go in a direction, there's no reason why they should have to keep maintaining the old version.

So layering is no longer a thing?

Posted Mar 2, 2021 3:55 UTC (Tue) by gnu_lorien (subscriber, #44036) [Link] (9 responses)

"Yes, you can fork, if you really want to. That also applies to the cryptography library we're ostensibly discussing. Nobody has forked it (to my knowledge) because nobody is willing to put in the effort required to continue supporting these platforms. If nobody is willing to do something, then that thing will not get done. It is not reasonable to simply point at whoever was doing it last and say "now that person needs to continue doing that thing until the end of time.""

I tend to agree here. The version that doesn't depend on Rust wasn't taken from anybody. It's just not being maintained. If somebody is interested in maintaining and updating an implementation that doesn't involve Rust it seems that they could easily fork it from here: https://github.com/pyca/cryptography/tree/3.3.x.

It feels like sometimes a situation like this, where the developers, in earnest, added a new dependency that they thought would improve the package, is treated the same as when a project is changed in order to add crypto-mining or some sort of backdoor. Nobody did this maliciously. If you disagree with the new direction, the code you'd need to chart your own course is ready and waiting.

So layering is no longer a thing?

Posted Mar 2, 2021 7:38 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (8 responses)

> I tend to agree here. The version that doesn't depend on Rust wasn't taken from anybody. It's just not being maintained.

Imagine a hypothetical situation where the LLVM developers change their code in such a way that it can no longer be used for Rust code generation. Would you also accept the answer "Well, then go and maintain an old fork of LLVM yourself!"?

Or a big company like Google takes charge of Linux kernel development and makes such drastic changes to the codebase that the kernel will only run on ARM and POWER devices. Would you also accept the answer "Well, then go and maintain an old fork of the kernel yourself if you need it for x86_64!"?

My point is: Linux distributions have traditionally been made possible because stable APIs allowed everyone to assemble their system of choice on their platform of choice. Be it Debian with a FreeBSD kernel or an open source fork of Solaris on x86_64, everyone has been able to use what they prefer in the configuration they wanted, with the individual components such as GNOME, Emacs, GCC, libpng etc. being highly portable thanks to using stable APIs.

Now all of a sudden, the Rust community comes around, takes core projects and rewrites them in Rust, causing a conflict with users who use exotic setups. And instead of trying to mediate according to their "openness and inclusivity" principles, they just alienate these users and tell them to stop using their custom setups and just use something more mainstream.

The Rust community is hampering its own success, as there are projects that would like to use Rust code but are currently holding off due to fears of reduced portability.

So layering is no longer a thing?

Posted Mar 2, 2021 8:38 UTC (Tue) by Wol (subscriber, #4433) [Link]

But I'll throw into the mix this ...

The llvm community will welcome you with open arms if you want to add a new back-end for S390 or S390x or whatever exotic architecture you fancy PROVIDED you take it upon yourself to ensure there exists a community to support it.

Isn't that the PERFECT open source attitude?

Cheers,
Wol

So layering is no longer a thing?

Posted Mar 2, 2021 8:56 UTC (Tue) by roc (subscriber, #30627) [Link] (2 responses)

> the Rust community comes around, takes core projects and rewrites them in Rust causing a conflict

You make it sound like the Rust community is roaming around taking over projects. That isn't true at all.

So layering is no longer a thing?

Posted Mar 4, 2021 8:18 UTC (Thu) by zdzichu (subscriber, #17118) [Link] (1 responses)

If they are writing the code in Rust, they're by definition a Rust community, no?
If librsvg maintainer writes part of it in Rust, isn't this Rust community rewriting librsvg?

So layering is no longer a thing?

Posted Mar 4, 2021 9:22 UTC (Thu) by mpr22 (subscriber, #60784) [Link]

> If they are writing the code in Rust, they're by definition a Rust community, no?

Here's a scenario:

J. Random Maintainer maintains Project X, currently in C.

J. Random Maintainer has to deal with a lot of memory safety bugs, because memory safety bugs are really easy to write in C, which is basically useless at stopping you from writing them.

Project X's performance and/or interface requirements make a lot of memory-safe languages look unsuitable to varying degrees.

J. Random Maintainer sees Rust. Investigating it, they find that Rust (a) compiles to native machine code (b) checks at compile time a lot of things that other safe languages have to check at run time (c) does not rely on a garbage collector or large common runtime library (d) can be used to build libraries presenting a C-compatible API+ABI (e) is very vigorous about forbidding some kinds of memory safety violation and making you actively flag the places where you commit others.

J. Random Maintainer decides that new development on Project X will now happen in Rust, and possibly even that the existing code base should be migrated (incrementally or flag-day) to Rust, because they're sick of fixing memory safety bugs reported in deployed versions.

This is not "the Rust community coming in and rewriting the project in Rust" but rather "the Project X maintainer deciding Rust is a really neat idea and adopting it".

The difference may not matter to those users of Project X whose systems aren't supported by the Rust toolchain, of course, but that doesn't make conflating the two things a good idea.
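
As an aside, point (d) in the scenario above is easy to illustrate. Here is a minimal, hypothetical sketch of a Rust function exported with a C-compatible ABI (the function name and signature are invented for illustration; the crate would be built as a cdylib or staticlib so existing C code can link against it):

```rust
// Hypothetical example: a Rust function exported with C linkage. A C caller
// would declare it as:
//     uint32_t byte_sum(const uint8_t *data, size_t len);

#[no_mangle]
pub extern "C" fn byte_sum(data: *const u8, len: usize) -> u32 {
    if data.is_null() {
        return 0;
    }
    // SAFETY: the caller promises `data` points to `len` readable bytes,
    // which is exactly the contract a C API would document.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
}
```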

So layering is no longer a thing?

Posted Mar 2, 2021 10:42 UTC (Tue) by farnz (subscriber, #17727) [Link]

Rust has maintained a fork of LLVM as part of the rustc project for a considerable period of time, because they've been fixing LLVM bugs that aren't visible in C++ code, but are in Rust, and they needed a fork to make it work while LLVM upstream took the fixes onboard. That's becoming less of an issue now, because LLVM is taking Rust seriously.

And in this case, it's not the Rust community that took on the rewrite - it's pyca, the upstream you were already depending on!

I would also argue that the stability we've seen in recent years has been an exception; I gave up on using Debian/MIPS for a desktop and then for a server because the Open Source community didn't take bugs on that platform seriously, recommending that I move to x86 instead. This has always happened to minority platforms; it's just that now it's being made very visible by the use of a new language, instead of being hidden in the architectural oddities of a GCC-supported platform that nobody cares about.

And it's a natural extinction event for architectures where no-one cares enough to put the work in. Motorola 68k and compatibles look likely to survive, because there are people who care enough to do the work in their spare time. S390 is likely to die, because no-one cares enough, but S390x will survive because IBM cares. After this, we'll have better visibility into what platforms actually work - instead of people thinking that HPPA, S390, MIPS and other architectures must work (when in fact the code compiles but is unreliable due to - for example - differences in mmap behaviour, which is one I got bitten by), they'll be able to see which architectures do actually have people who are willing to fix up bugs.

So layering is no longer a thing?

Posted Mar 2, 2021 15:42 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

> Imagine a hypothetical situation where the LLVM developers change their code in such a way that it can no longer be used for Rust code generation. Would you also accept the answer "Well, then go and maintain an old fork of LLVM yourself!"?

> Or a big company like Google takes charge of Linux kernel development and makes such drastic changes to the codebase that the kernel will only run on ARM and POWER devices. Would you also accept the answer "Well, then go and maintain an old fork of the kernel yourself if you need it for x86_64!"?

First of all, these things are incredibly unlikely to happen to a big project like LLVM or the Linux kernel. LLVM wants to maintain support for popular languages, and Rust is now popular enough that it's going to be considered an essential language to support. You're far more likely to see support added to GCC than removed from LLVM. And no single company can take over a project like the Linux kernel as long as dozens of companies provide important developers. Those kinds of things can only happen to a project that is already in deep trouble.

But secondly, the answer is that yes, the community should fork if its leaders are trying to take it in the wrong direction. When a few obnoxious developers have tried to take projects in directions that were unpopular with the broader development base, one of two things tends to happen. Either the lead developers are deposed (in communities where that's possible) or the project forks, and the side with broad developer support quickly surpasses the side with toxic leadership. If the community doesn't have the ability to support a fork that retains the key features they wanted, they probably didn't have the ability to maintain those features within the older project.

Seriously, if the main branch of the kernel stopped supporting x86_64, the fork would have way, way more developers than the ARM-only branch. The only way the opposite situation would happen is if x86_64 became a small enough platform that it was having trouble getting enough developers. Dropping support for old, minority platforms is not a sign of a failed community; it's a sign of lack of support for those old, minority platforms.

So layering is no longer a thing?

Posted Mar 3, 2021 12:16 UTC (Wed) by ceplm (subscriber, #41334) [Link]

> Imagine a hypothetical situation where the LLVM developers change their code in such a way that it can no longer be used for Rust code generation. Would you also accept the answer "Well, then go and maintain an old fork of LLVM yourself!"?

EVERY piece of code you use from somebody else is a potential problem for you; each of those thousands of NPM packages you have in node_modules/ is a liability for your project, and you are responsible for doing something when it stops working for you. You did study all that code when including it in your project, the code which you effectively ship to your clients, right?

And if you maintain code for s390 then you are not doing it for free (there are some things in the world that people don't do for free; I will keep the list PG-13, but maintaining s390 software certainly belongs there). Either you switch to some other cryptography library (hey, I am an upstream maintainer of M2Crypto!), fork pyca and keep updating it in the C-only form, or port LLVM to your platform. The choice is yours. And THAT is the choice Linux is all about. http://www.islinuxaboutchoice.com/

So layering is no longer a thing?

Posted Dec 22, 2022 6:06 UTC (Thu) by mrugiero (guest, #153040) [Link]

> Imagine a hypothetical situation where the LLVM developers change their code in such a way that it can no longer be used for Rust code generation. Would you also accept the answer "Well, then go and maintain an old fork of LLVM yourself!"?
>
> Or a big company like Google takes charge of Linux kernel development and makes such drastic changes to the codebase that the kernel will only run on ARM and POWER devices. Would you also accept the answer "Well, then go and maintain an old fork of the kernel yourself if you need it for x86_64!"?

You may not like it, but that's 100% how any kind of community effort works. Either you do the work, or you don't. If Google takes charge of mainline Linux kernel development and does whatever, either a community of devs decides to fork, or they don't and it becomes Google's shop. Under no circumstances will Google decide it's time to think of the poor little users. If you expect that, you'll live floating from disappointment to disappointment.
So, yes, that's an answer I would accept, because just as I value my time and expect you to respect it, I owe other people respect for theirs.

> My point is: Linux distributions have traditionally been made possible because stable APIs allowed everyone to assemble their system of choice on their platform of choice. Be it Debian with a FreeBSD kernel or an open source fork of Solaris on x86_64, everyone has been able to use what they prefer in the configuration they wanted, with the individual components such as GNOME, Emacs, GCC, libpng etc. being highly portable thanks to using stable APIs.

More than anything else, distributions are made possible by people doing the work, not by the spectators booing from the stands.

> Now all of a sudden, the Rust community comes around, takes core projects and rewrites them in Rust, causing a conflict with users who use exotic setups. And instead of trying to mediate according to their "openness and inclusivity" principles, they just alienate these users and tell them to stop using their custom setups and just use something more mainstream.

Now all of a sudden, the exotic-setup users tell the developers to stop using the tool of choice that makes their lives easier, to stick to the stuff needed to keep their exotic setup working, and to do it for free while they're at it.
So why do we expect these devs to keep supporting these users' software when they're not willing to do the same? The first sign of laziness is feeling insulted when someone suggests you should roll up your sleeves and do the things that are in your own interest (and possibly only yours).
Pragmatically, there are only three ways to solve this disagreement:
- Fork the original code;
- Implement support for your fringe platform in LLVM;
- Use the old version.
Note how none of them is "make the other person do it for me"?

> The Rust community is hampering its own success, as there are projects that would like to use Rust code but are currently holding off due to fears of reduced portability.

Such as?

So layering is no longer a thing?

Posted Mar 5, 2021 7:31 UTC (Fri) by ssmith32 (subscriber, #72404) [Link] (1 responses)

> There is a concept of layering in this industry. The assumption is that if you build your software on top of a given set of well-defined layers, your program will reliably work on any system, so long as those underlying layers have been made to work on those systems.

Except if "the system" is one of those things in the set of well-defined layers or if one person's "made to work" is another's "patchy, unmaintained, and only works on the second Tuesday" broken port.

So layering is no longer a thing?

Posted Mar 6, 2021 21:28 UTC (Sat) by NYKevin (subscriber, #129325) [Link]

"The system" is just a funny way of writing "the layers that are below me." To a kernel developer, "the system" is the architecture and microcode, as well as other hardware and firmware that the kernel knows about. To an application developer, "the system" might be Debian or Fedora. For cloud users, "the system" is AWS or GCP (or one of their competitors), and for web developers, "the system" is Chrome or Firefox (or one of their competitors).

Personally, I think Randall Munroe got this right in https://xkcd.com/1349/. By default, nothing works unless someone makes it work. The fact that software X works on system Y, despite the designer of X never having interacted with Y, is the result of someone else doing a whole lot of work in terms of porting and maintaining GCC, libc, the kernel, etc. It was never "free"; it was just amortized over a large set of other software that happens to have the same set of dependencies as X. If X takes on a new dependency, then that needs to be ported too, but this is not a deficiency in either X or Y, it's just the way things work.

More control over binary builds

Posted Mar 1, 2021 17:12 UTC (Mon) by epa (subscriber, #39769) [Link] (2 responses)

This reminds me of the erstwhile Linux Hater, whose blog pointed out several problems with Linux on the desktop; one of its main points was that the application developer has too little control over the final build and the environment it's run in. The same source code will be rebuilt atop the shifting sands of compiler versions and standard library versions, then dynamically linked against whatever version of various shared libraries is installed, as long as they claim to be compatible by major version number. In the Hater's view, the Windows model of producing a single known binary release was superior. Then, if debugging a problem on an end user's system, you know exactly what you're dealing with.

More control over binary builds

Posted Mar 16, 2021 18:07 UTC (Tue) by immibis (subscriber, #105511) [Link] (1 responses)

You would think so, but the binary build has the same problem from the other end: the end user can't tweak the software, even if they want to!

And while Microsoft has a good record with backward compatibility, it's not 100% - if they change the system *anyway*, the end user (no matter how technical) often has little ability to diagnose or fix the problem. If Linux followed the Microsoft model, the Raspberry Pi simply wouldn't exist.

More control over binary builds

Posted Mar 20, 2021 8:53 UTC (Sat) by epa (subscriber, #39769) [Link]

I am not saying the binary should be the only thing that exists, rather that the 99% of users who don't want local changes can download a known build. Big projects like Mozilla and Libreoffice follow this model, except where they are packaged by Linux distributions.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 17:45 UTC (Mon) by anholt (guest, #52292) [Link] (5 responses)

I went through this recently in the Mesa project, where I am working to remove tens of thousands of lines of unmaintained, architecturally incorrect (as in "no route to passing the conformance tests"), non-CI-tested, unused code. Another developer complained that my alternative (the less-unmaintained, architecturally-plausible, CI-tested software rasterizers) would be slower on non-LLVM architectures for the debian packages which were using the old driver.

So I looked at debian's popcon, and the sum of all of the non-LLVM architectures installing Mesa's drivers was 0.02% of the install base of amd64 alone. And that's in debian, where the hobby maintainers of those architectures are more likely than the average user (or even more so the users of debian derivatives) to have popcon installed and thus are overrepresented.

In the end, I just talked to the debian package maintainer, who said (paraphrasing) "Oh, I can just flip the non-llvm architectures away from the old driver and have less untested special-case package build system for these dead arches? I'll do that right now", and then I got to go delete the stuff, since it was unreachable from any distribution. But I don't think that should have even been necessary.

Hobby architectures are fun. I've got my own spare-time project for supporting a hardware driver in Mesa that's been effectively unmaintained for a decade or more since it has basically no users left. But hobbyists like me need to understand that software maintenance is work and nobody else has to keep our fun time projects going.

And, for the Red Hats of the world taking money for shipping s390x, it's actually their job then to make s390x work, not anyone else's. But then, I don't see RHers doing so in my world; it's more from the open source purists that I see the push-back against regressing hobby hardware.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 17:54 UTC (Mon) by rvolgers (guest, #63218) [Link]

One thing that got kind of buried in the discussion is that Rust actually supports s390x and provides official releases for it: https://doc.rust-lang.org/nightly/rustc/platform-support....

It's just that some distros didn't package it on that architecture, and some people *did* actually need s390 and then others read the discussion as saying s390x was unsupported.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 18:15 UTC (Mon) by pizza (subscriber, #46) [Link] (2 responses)

> And, for the Red Hats of the world taking money for shipping s390x, it's actually their job then to make s390x work, not anyone else's. But then, I don't see RHers doing so in my world; it's more from the open source purists that I see the push-back against regressing hobby hardware.

Let's not forget that it's those same "open source purists" that have received little to no direct compensation for their efforts over the years and are now finding that the social contract has been unilaterally changed on them.

That "non-hobby" stuff cuts both ways; why should they continue supporting non-hobby users when it's clear it's not reciprocated?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 18:54 UTC (Mon) by anholt (guest, #52292) [Link] (1 responses)

I've made a lucrative career out of being an open source purist[1]. So have the open source developers I work with daily.

I've also been solidly on the hobbyist side, back when I was a college student porting drivers to FreeBSD just because I wanted to (though I did a couple of times get a "free" graphics card from a contractor who would have had to do the porting otherwise, and I thought that was a pretty sweet deal at the time and I still appreciate what they did for me).

I agree there are some weaknesses in corporate interests supporting shared infrastructure of open source, particularly for efforts like reproducible builds or CI farms. But of the open source software that I'm relying on, I see it overwhelmingly written and maintained by caring open source developers who are doing it in their day jobs, not by people like me-in-college doing it for fun.

[1] I've stooped as low as "run closed source simulators as part of otherwise pure open source driver development" and "hold unreleased-hardware driver code behind closed doors for up to a year before releasing under the MIT license", and I once got burned by a vendor who never let us release our few thousand lines of work we did before we caught on that the project was doomed. So, not as pure as the more pure people I know, who incidentally are also employed full time in free software.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 20:43 UTC (Mon) by Wol (subscriber, #4433) [Link]

> [1] I've stooped as low as "run closed source simulators as part of otherwise pure open source driver development" and "hold unreleased-hardware driver code behind closed doors for up to a year before releasing under the MIT license",

If you're not a Free Software Fanatic, why not? And even then, when you look at the RMS printer story, what you did should be perfectly okay!

> and I once got burned by a vendor who never let us release our few thousand lines of work we did before we caught on that the project was doomed. So, not as pure as the more pure people I know, who incidentally are also employed full time in free software.

Which is when you realise the NDA should have said "while the product is in pre-release development", not "until the product is released".

Cheers,
Wol

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 17:15 UTC (Tue) by daenzer (subscriber, #7050) [Link]

I merely pointed out:

* The old swrast code is usable with some real-world interactive apps, while softpipe isn't.

* At the time there were still Debian architectures where swrast was shipped and llvmpipe wasn't available.

I never made any claims about how many users this would actually affect. Even if it's > 0, one can certainly make an argument that the number is too small to continue offering swrast as a choice. All I asked was for this to be a conscious and honest decision, so that any affected users might have felt treated with a modicum of respect.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 18:39 UTC (Mon) by jfred (guest, #126493) [Link] (53 responses)

> I put this one last because it’s flippant, but it’s maybe the most important one: outside of hobbyists playing with weird architectures for fun (and accepting the overwhelming likelihood that most projects won’t immediately work for them), open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

I have such mixed feelings about this. On one hand, I agree that it's unreasonable for developers to be expected to explicitly support all sorts of hardware that's realistically only ever going to be used by large corporations (or hobbyists playing with old enterprise hardware). The large corporations can and should pay for work on it.

On the other hand, there's something about this that bothers me: the more that this attitude becomes the norm, the more difficult it is for new architectures to gain ground in the first place. While I haven't looked at the numbers on this specifically, I would bet that (until relatively recently/aside from the kernel itself thanks to Android) a significant chunk of the work on getting Linux software running on ARM was done by hobbyists. The individual developers probably didn't care much, but their software would run on ARM Linux nonetheless, and there were plenty of volunteers working on making that happen. And the end result is that I can take a random ARM SBC running Debian and install basically whatever I want on it from Debian's repos, and approximately nothing aside from that. (Yes, this has been improving lately due to increasing corporate interest in ARM servers; no, I don't think that invalidates the point.)

That said, the article does make good points about C not being truly cross-platform and the problems that causes, which is unfortunate given that a significant portion of our system software is written in it.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:30 UTC (Mon) by roc (subscriber, #30627) [Link] (1 responses)

It would be interesting to make a list of what a new architecture has to do to run a modern Linux desktop distro well and look at how that changes over time. For example I think until recently you needed a gcc backend but not an LLVM backend. You're starting to need both. But maybe soon, or even now, you don't need a gcc backend.

A few other things I can think of:
-- Linux kernel port
-- V8 JIT backend
-- Optimized codecs for some popular formats (JPEG, H264, AV1)

There must be more...

Woodruff: Weird architectures weren't supported to begin with

Posted Dec 22, 2022 5:55 UTC (Thu) by mrugiero (guest, #153040) [Link]

> -- Linux kernel port

Certainly.

> -- V8 JIT backend

If Microsoft is to be believed, this may not necessarily be true[0].

> -- Optimized codecs for some popular formats (JPEG, H264, AV1)

This depends. Does the architecture support PCIe? Can it use an existing GPU for that? Is the driver portable in terms of word size and endianness and all that?

[0]: https://microsoftedge.github.io/edgevr/posts/Super-Duper-...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 2:28 UTC (Tue) by sionescu (subscriber, #59410) [Link] (48 responses)

The entire piece has an underlying assumption that users shouldn't be able to run software on platforms that are not explicitly authorized and tested by the authors, and that goes against the spirit of the open source community.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 9:52 UTC (Tue) by xophos (subscriber, #75267) [Link] (6 responses)

Nobody is saying they shouldn't, but also nobody is volunteering to do the work of porting llvm to some of these architectures. And nobody is to blame for that.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 16, 2021 18:10 UTC (Tue) by immibis (subscriber, #105511) [Link] (5 responses)

But people *are* saying that hey, it's pretty neat that this all mostly worked until now, and all you needed was a C compiler as a "standard code generator". Maybe we should design some system that maintains this property, just to make architecture porting easier in the future. (My money would be on a C backend for LLVM)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 17, 2021 7:32 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (3 responses)

> My money would be on a C backend for LLVM

If you already have a decent C compiler for the new architecture then that might help. However, if this is an emerging architecture which doesn't have any toolchain yet, wouldn't it be simpler to implement an LLVM backend for that architecture (perhaps as a fork if it doesn't meet the requirements to be accepted upstream) rather than write a new, modern, standards-compliant C compiler from scratch? Implementing the LLVM backend would also enable compilation of various other languages based on LLVM, not just C, so you would get more benefit from (arguably) less work.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 17, 2021 8:04 UTC (Wed) by johill (subscriber, #25196) [Link] (2 responses)

> If you already have a decent C compiler for the new architecture then that might help.

There are probably *dozens* of (niche?) compilers for various CPUs. I was recently asked if something like wifi firmware could be done using rust, but given the toolchains involved, there isn't really a chance of that. One system which is documented is for ar9170 (https://wireless.wiki.kernel.org/en/users/drivers/carl917...) and that uses an sh-elf gcc compiler. And this is for an upper MAC, lower MAC stuff is often even more arcane, see e.g. https://bcm-v4.sipsolutions.net/802.11/Microcode/.

Having a C backend would make rust useful in a lot of places like this.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 18, 2021 1:01 UTC (Thu) by pabs (subscriber, #43278) [Link]

Another one is the ath9k_htc firmware, which uses a fork of the GCC Xtensa code:

https://github.com/qca/open-ath9k-htc-firmware/

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 18, 2021 16:02 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

Yes, I agree that a C backend would be useful for cases like that. Though it might be a bit difficult to come up with a subset of C that results in reasonably performant code supported by all those niche compilers and can still represent everything that can be done with LLVM, including some features which were implemented specifically to support constructs from other languages which have no direct equivalent in portable C. (E.g., in Rust an unbounded empty loop is well-defined, but the same is not true in C.) The point was more that, in the future, I think it would be better to plan to implement an LLVM backend for each newly developed CPU—which gives you a C compiler, among others, "for free"—rather than planning to bootstrap everything through generated C code.
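
To make the loop example concrete, here is a minimal sketch of the Rust side (this only illustrates the language rule, not the behaviour of any particular LLVM C backend):

```rust
// In Rust, a side-effect-free infinite loop is well-defined and must be
// preserved; `loop {}` has the "never" type `!`, so it can be the entire
// body of a function that never returns. C and C++ forward-progress rules
// allow compilers to assume many such loops terminate, which is one reason
// a "lower Rust to portable C" backend would need special care here.
fn spin_forever() -> ! {
    loop {}
}

fn main() {
    // Only spin when explicitly asked, so the example normally terminates.
    if std::env::args().any(|arg| arg == "--hang") {
        spin_forever();
    }
    println!("done");
}
```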

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 17, 2021 16:15 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

There are at least 3 different C backends for LLVM, in various states of disrepair.

Some time ago my friend tried to use the Keil C compiler for the 8051 (yeah, they are still widely used). The basic stuff worked outright; simple C++ programs compiled and ran. But it was slow, and the compilation process lost the special 8051-specific extensions that are needed in practice. They ended up abandoning C++ and just writing stuff in pure C.

I've heard that people trying to use LLVM for 16-bit x86 DOS have the same issues.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 14:44 UTC (Tue) by marduk (subscriber, #3831) [Link]

Open Source is a gift. A privilege, not a right. The underlying assumption is that those who labor to gift us with open source software provide it "'as is' without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose"

When such a gift does not scratch our particular itch, we are given the freedom to modify it so that it does, pay someone else to, or return the unused portion for a complete refund.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 3:14 UTC (Wed) by rodgerd (guest, #58896) [Link] (39 responses)

The entire point of the article is that offering a bunch of semi-tested binaries that don't actually do what they say is dishonest. And it's spot on.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 3:48 UTC (Wed) by pizza (subscriber, #46) [Link] (38 responses)

> The entire point of the article is that offering a bunch of semi-tested binaries that don't actually do what they say is dishonest. And it's spot on.

In other words, we can only trust binaries that are officially created, distributed, and/or blessed by the upstream project, and everyone else providing binaries is being "dishonest"

...Right.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 11:06 UTC (Wed) by farnz (subscriber, #17727) [Link] (36 responses)

No - it's fine to create binaries that aren't blessed by the upstream project; but when you do so, you cannot then expect the upstream project to be on the hook for keeping them working.

What's dishonest is presenting untested binaries to users as "supported on all platforms", when upstream doesn't support those platforms and nor do you. What's really dishonest is then expecting upstream to change their plans because you promised a level of support that upstream didn't offer you (let alone promise to provide), and now you're upset that upstream changes are making that promise hard to keep.

If you really need upstream to help you out, then you need to communicate with upstream, and make sure that they're happy to do things in a way that lets you keep your promises; if that means nothing more modern than ANSI C89 (not even ISO C90), then you need upstream to agree to that. If it means "anything supported by CPython or its build deps when using GCC", then you need upstream to agree to that. Alternatively, you as the person making promises to your users need to be ready to cope with what upstream does.

Something similar applied back when Canonical had a project manager make demands of me relating to a bug I was fixing for my own reasons in the i915 kernel driver's DisplayPort support; it may be challenging for you to offer the support you've promised without upstream's help, but that's not upstream's problem - that's yours.

Again, turning it round helps make it clearer; Red Hat sell supported products with my code in; am I on the hook to Red Hat for anything if their customers have problems with code I wrote?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 12:54 UTC (Wed) by pizza (subscriber, #46) [Link] (35 responses)

> What's dishonest is presenting untested binaries to users as "supported on all platforms", when upstream doesn't support those platforms and nor do you.

By your standard, the actual amount of "supported" software out there is so small it might as well be thermal noise. Even those that claim to "support" their stuff don't actually do so, when push comes to shove -- "this software is provided as-is without any warranty whatsoever". Indeed, even what you are calling "support" is being "dishonest" because if you don't have a legal contract that spells out what "supported" actually means (and the non-zero amount you're going to pay for it) the default position is *none whatsoever*.

If your point is that "supported" is a nonsensical term to use in F/OSS (or really, most any) context, and new terms need to be invented that reflect the varying degrees of "we expect this to work but we can't make any guarantees that it will, or that we will fix it if it doesn't" that exist across the F/OSS spectrum, then sure, I agree with you. But that's already the status quo, whether you use the term "supported" or something more nuanced.

Meanwhile, you keep fixating on "untested binaries" -- so does the distro running the upstream test suite (if any) as part of its build suffice to make those binaries "tested"? And if there's no test suite, what makes upstream's binaries any more "supportable"? I should point out that for the overwhelming majority of F/OSS, "upstream" does not provide anything more than bare source code. Does that mean that this software is inherently "unsupportable"?

> Again, turning it round helps make it clearer; Red Hat sell supported products with my code in; am I on the hook to Red Hat for anything if their customers have problems with code I wrote?

Only if Red Hat pays your salary. Which, to be fair to Red Hat, they tend to do when it comes to the software they ship.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 13:33 UTC (Wed) by farnz (subscriber, #17727) [Link] (31 responses)

And there's the thing - what "supported" means matters - there's a difference between "we won't stop you trying to use it, but it may not even compile, let alone do what it's supposed to do", "it will definitely compile to a binary, but may not work as intended and we won't fix it" and "we will make a best-effort attempt to keep it working on this platform". And there's a big difference between "we will make a best-effort attempt to keep it working on this platform" and "if it does not work for you, we will fix it". If you're going to offer one level of support, you need to either be able to offer it yourself, unaided by upstream, or have upstream agree to limits on what they do that make it possible for you to offer that level - be it upstream on the hook for "best effort support for HP-PA", or "we won't use languages not supported by GCC".

So, you need to be clear on what you mean by support - and by my standards, most projects are reasonably honest about what they mean (they promise to make their best efforts to help me resolve issues, but don't promise a fix). Dishonesty comes in when you promise one level of support (e.g. "best-effort"), while not putting in place the needed bits to allow you to uphold that promise, on the basis that "nothing will go wrong".

The current fuss is happening because the pyca upstream were offering "we won't stop you trying to use it, but it may not even compile" levels of support for niche platforms, while the Gentoo downstream was offering "we will make a best-effort attempt to keep it working on this platform", along with a certain amount of fuss because Alpine Linux had issues that it's working on resolving.

It all explodes because, instead of the Gentoo downstream saying "right, we'll sort out what we need to support pyca on HP-PA (and S390 etc), thanks for the advance warning that we'll need to do this", they went straight to "how dare you make a change that makes our support promise, which we never told you about, let alone got you to agree to, hard to meet". If this is the level of support that Gentoo wants to offer to HP-PA users for pyca, then it either needs to get upstream in on the promise (hence only using things that should work on HP-PA), or needs to do the work themselves despite upstream (forking, implementing dependencies, whatever).

And while Red Hat is good about not asking random contributors to be on the hook to their customers, not all distributions are that nice. I've had to tell one distribution company (not Red Hat) that I am not going to backport my kernel patches for them, nor am I prepared to assist their customers with the problem they had that my patch fixed for me, and that threatening me with legal action is not going to help, either.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 14:53 UTC (Wed) by pizza (subscriber, #46) [Link] (30 responses)

> Dishonesty comes in when you promise one level of support (e.g. "best-effort"), while not putting in place the needed bits to allow you to uphold that promise, on the basis that "nothing will go wrong".

Okay, fair enough. (Though IMO the term "best-effort" is highly subjective, and anyone who interprets that as more than "you're entirely on your own" is a naive fool...)

>And while Red Hat is good about not asking random contributors to be on the hook to their customers, not all distributions are that nice. I've had to tell one distribution company (not Red Hat) that I am not going to backport my kernel patches for them, nor am I prepared to assist their customers with the problem they had that my patch fixed for me, and that threatening me with legal action is not going to help, either.

They actually threatened you with legal action? Just... wow. That's a level of douchiness that even I've not run into.

I've found that the best way to shut up entitled folks is to apply the "charge 'em 'till you like 'em" principle -- "I estimate what you're asking for will take me $X hours. My standard scheduled-well-in-advance rate is $Y. If you want it done faster than that, my rate will be $Y^$N, where $N is directly proportional to the turnaround time, how little I care about your problem, and your demonstrated level of douchiness."

I rarely hear back after that.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 15:44 UTC (Wed) by farnz (subscriber, #17727) [Link]

I'd agree that best-effort is subjective and that, absent a legal contract defining it, it is meaningless - and that for all open-source, the legally required level of support is "none at all - you're on your own". That said, we're also discussing the social context, and what is and is not fair to expect of your upstreams.

In the social context, there are two lines of dependency:

  1. What upstreams explicitly promise to their downstreams. While you have no way to enforce that promise, it seems unfair to state that when making your own social promises, you can't rely on other people's social promises. For example, upstream may promise to keep supporting an explicit set of platforms, and a downstream can reasonably build its own promises on top of that.
  2. What is implicitly promised by upstream to their downstreams by upstream's behaviour. This is largely what's triggered this fuss - upstream changed behaviour by using a new programming language, and downstreams are upset.

Here, we had a mismatch between the implicit promises upstream thought they were making (to provide a Python cryptography module that has decent security, is easy enough to use correctly that it can be your first choice for cryptography, and works on an explicit set of platforms), and the implicit promises that downstream thought were being made (to provide a Python cryptography module that only depends on things that can be built for any Linux platform using GCC). The result is pain for the downstream that thought that more was promised than actually was, and the fuss is because that downstream is trying to push the pain back upstream rather than accepting that upstream never promised them what they need.

So, the legal position is clear - downstream has no way to enforce that upstream does what they want. The social position is less clear; did upstream give downstream reason to believe that upstream would behave in a way that downstream is happy with, or did downstream expect something unreasonable of upstream, given what upstream has explicitly promised? Personally, I view downstream as in the wrong here - they made assumptions about what upstream was going to do, didn't tell upstream what they expected of them, and then got upset when upstream didn't live up to their assumptions.

And yep, they got douchy. I told my employer what they wanted, my employer wrote back with a quote for my time (vastly inflated, of course, and would have resulted in a nice bonus for me and my colleagues - small firm), and they wrote back to me directly threatening to sue because I had not fixed their customer's problem, and the fact that my employer was willing to quote them for taking up my time implied that I had incurred liability for the problem that I had not caused, merely failed to fix to their satisfaction. I passed that back to my employer, we contracted a suitable lawyer, said company paid our lawyer's fees to make us go away. I assume the douchy person got thoroughly reprimanded for threatening me - it certainly meant that when one of their colleagues contacted me asking about the intention behind my fix, I replied pointing to the legal threats and saying that I would not work with them as a result.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 15:09 UTC (Thu) by johannbg (guest, #65743) [Link] (28 responses)

" They actually threatened you with legal action? Just... wow. That's a level of douchiness that even I've not run into. "

Well, it's getting even better or worse, depending on how you look at it.

Corporations, with or through governments, and/or governments themselves, are trying to implement a real-names policy for project owners and/or contributors to open-source software, for one purpose only: accountability/liability.

Take for example this fairly recent Google "security" post. [1]

" Goal: For Critical Software, Owners and Maintainers Cannot be Anonymous

Attackers like to have anonymity. There have been past supply-chain attacks where attackers capitalized on anonymity and worked their way through package communities to become maintainers, without anyone realizing this “new maintainer” had malicious intent (compromising source code was eventually injected upstream). To mitigate this risk, our view is that owners and maintainers of critical software must not be anonymous.

It is conceivable that contributors, unlike owners and maintainers, could be anonymous, but only if their code has passed multiple reviews by trusted parties.

It is also conceivable that we could have “verified” identities, in which a trusted entity knows the real identity, but for privacy reasons the public does not. This would enable decisions about independence as well as prosecution for illegal behavior. "

So individual(s) decide to create a F/OSS project, and after a while it gets picked up by some corporation or government, with or without the project's creators'/owners' knowledge.

It gets classified as a core component of said corporation's product portfolio or government infrastructure, or is simply used for illicit purposes without their knowledge.

The project's creators/owners go about their usual F/OSS business: they write/remove/review/accept & commit their own code, or a contribution from someone which fixes 1 bug but introduces 3 new ones.

That "fix" introduces "a security flaw" in the project or otherwise prevents it from working "normally" in the corporations "product" or government infrastructure and suddenly the project owner(s) and or it's contributors have become liable in the process.

You as the project owner for a) not validating that "John Doe" was a real person, and exactly that "John Doe" on the planet, and b) for not being all-knowing in the language being used in your project and committing the fix that led to things "breaking".

The contributor for not using his "real name" but instead using a pseudonym to distinguish himself from the rest of the "John Does" on the planet (everyone who uses a pseudonym is doing so to conduct illegal behavior, right? "pizza" must be a bad person, not a person who simply loves pizzas), or for simply creating a PR with a fix that broke other stuff, hence you must have been doing so for illegal purposes, not because you simply knew no better. Etc. etc. etc.

As things are progressing, accountability/liability will enter the F/OSS world regardless of any kind of "license" that the project uses.

1. https://security.googleblog.com/2021/02/know-prevent-fix-...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 15:59 UTC (Thu) by pizza (subscriber, #46) [Link] (27 responses)

> As things are progressing, accountability/liability will enter the F/OSS world regardless of anykind of "license" that the project uses.

Given that "the software is provided as-is, without any warranty whatsoever", it will be a pretty hard to convince a court that the software owners/maintainers/authors/contributors/roommates/pets/toasters are somehow liable for an end-user's choice to use that software for safety-critical tasks, much less commercial inconvenience.

Absent a contract, the standard license disclaimer applies.

(and any laws mandating strict liability to software authors will completely tank the economy from top to bottom, and such will be strongly opposed by pretty much everyone that's not a law firm seeking to cash in)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 16:40 UTC (Thu) by johannbg (guest, #65743) [Link] (26 responses)

Legislation (like a real-names policy) will just be introduced and implemented which circumvents that. One way or another, accountability (and thus liability) will find its way to F/OSS.

I would not be surprised if software ends up being heavily regulated in the future, in a similar manner to building codes/control/regulation, and anything that does not follow that regulation could be deemed illegal; at that point, whatever license (open/closed) the project has becomes irrelevant.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 17:41 UTC (Thu) by pizza (subscriber, #46) [Link] (25 responses)

> I would not be surprised if software ends up being heavily regulated in the future in similar manners as building codes/control/regulation and anything that does not follow that regulation could be deemed illegal and at that point whatever license ( open/closed ) the project has, has become irrelevant.

I'm reminded of a saying attributed to the former ruler of Dubai:

"My grandfather rode a camel, my father rode a camel, I drive a Mercedes, my son drives a Land Rover, his son will drive a Land Rover, but his son will ride a camel"

Strict regulation/liability will pretty much end the "Software industry" overnight, and take most of the economy down with it.

You'll never get me to say that the "Software industry" is healthy, but this cure will be vastly worse than the disease, and everyone (especially those who control the levers of power) knows it.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 18:20 UTC (Fri) by khim (subscriber, #9252) [Link] (24 responses)

> Strict regulation/liability will pretty much end the "Software industry" overnight, and take most of the economy down with it.

Only if you were to introduce these strict regulations overnight. But then, if someone had introduced today's building rules 300 years ago… the economy would have crashed, too.

> You'll never get me to say that the "Software industry" is healthy, but this cure will be vastly worse than the disease, and everyone (especially those who control the levers of power) knows it.

At some point they would have no choice. The cost of programming errors grows larger and larger the more interconnected the world becomes. At some point either new software would have to become more reliable… or we should stop using it. The attempt to cause the former may lead to the latter… but governments would try, anyway.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 20:03 UTC (Fri) by johannbg (guest, #65743) [Link] (23 responses)

Parts of the industry already are regulated, and we are at the point where cars are being recalled due to failing software in brakes <--- so it does not take more than a couple of the EV Hyundais [1] of the world, or AI/self-operated vehicles out of control and without brakes plowing through a series of kindergarten schools, or someone simply placing a signal jammer at a busy intersection which causes a vehicle to lose its "connection" to the "cloud" and <bam>, for the entire software industry to become heavily regulated.

It's inevitable evolution that the industry will be regulated (which has already gradually been happening) and that people will be held accountable for what they write, regardless of which license the program they wrote ends up having. So if people think that a F/OSS-licensed program will somehow be exempt from those eventual rules and regulations, they are sadly mistaken. The blame game ends with the person that wrote the code and the people that reviewed and committed it, as Volkswagen has already proven [2].

1. https://thedriven.io/2020/12/22/hyundai-recalls-kona-elec...
2. https://www.reuters.com/article/us-volkswagen-emissions-s...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 20:50 UTC (Fri) by pizza (subscriber, #46) [Link] (19 responses)

> It's inevitable evolution that the industry will be regulated ( which already gradually has been happening ) and people will be held accountable for what they write regardless under which license the program they wrote ends up having so if people think that F/OSS licensed program will somehow be exempt from those eventual rules and regulation they are sadly mistaken.

"Held accountable" how, exactly, and to what extent? Criminal liability resulting in jail time or fines? Or some sort of civil liability? Either way, where does this "accountability" end?

> The blame game ends with the person that wrote the code and the people that reviewed and committed it as Volkswagen already has proven [2].

What VW did wasn't a "coding mistake" - It was outright fraud, willfully perpetrated all the way up to the highest executive levels across at least two continents.

The "engineer" in the article didn't write a single line of code; he instead he was acting in a managerial role and signed off on stuff he knew was false -- in other words, he was one of the folks who actively lied to regulators and directly committed a crime.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 21:43 UTC (Fri) by johannbg (guest, #65743) [Link]

> "Held accountable" how, exactly, and to what extent? Criminal liability resulting in jail time or fines? Or some sort of civil liability? Either way, where does this "accountability" end?

Some form of liability. A real-names policy (like the one Google was advocating for) is designed and implemented to go after person(s), not program(s).

The vehicle example I gave is not unlikely to happen in a F/OSS world [1].

I'm pretty sure further down the line we will see some examples in court that show how far the term "coding mistake" can be stretched. I have no idea what exact role he played within VW, but initially VW blamed a "rogue" team of engineers within the company; obviously the entire developer team who wrote the code should have gone to jail - it's not like they did not know it was unethical and illegal. All of the persons involved could have said "no, I won't do that, I quit" or "get someone else to do it", etc.; it's not like they can hide their heinous act behind some "management" decision.

1. https://www.autoware.org/

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 22:36 UTC (Fri) by kleptog (subscriber, #1183) [Link] (17 responses)

> "Held accountable" how, exactly, and to what extent? Criminal liability resulting in jail time or fines? Or some sort of civil liability? Either way, where does this "accountability" end?

The question of liability is the easy part and already done: whatever entity sells the product/service to the customer is liable. They bear full responsibility. They may try to legally pass off the risk onto any of their suppliers for different parts, but for the consumer that is irrelevant, they sue the entity they bought the product/service from. If the supplier chooses to use Free Software, then they are also liable for its proper functioning, unless they can find someone to accept the risk for them.

Regulations would be required if it turns out that just apportioning liability is not enough. Since no manufacturer is ever required to use free software, I don't see how there is likely to be any issue there.

Now, software companies get away with disclaiming liability because that seems to work and not too many people complain about it. Regulations tend to be written in blood, so unless we see a situation where lots of people start dying due to dodgy software, I don't see a lot of regulation occurring outside of where it already exists, e.g. cars, planes, medical devices.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 23:04 UTC (Fri) by johannbg (guest, #65743) [Link]

Well, I see plenty of use for regulation of any software used in a nation's critical infrastructure sectors, and depending on the country and the sector, a lot of F/OSS software might be in use there.

You see, the thing is that governments have a tendency to become very unhappy if they suddenly find out that someone, out of the kindness of their heart, was helping them back up their data, or ensuring they meet their environmental commitments by shutting down the nation's power grid, etc., despite it all being done in good faith and in the nation's best interest. :)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 0:23 UTC (Sat) by pizza (subscriber, #46) [Link] (15 responses)

> The question of liability is the easy part and already done: whatever entity sells the product/service to the customer is liable. They bear full responsibility. They may try to legally pass off the risk onto any of their suppliers for different parts, but for the consumer that is irrelevant, they sue the entity they bought the product/service from. If the supplier chooses to use Free Software, then they are also liable for its proper functioning, unless they can find someone to accept the risk for them.

See, I'm fine with that. There's an existing supplier<->customer relationship.

But how exactly can I, by virtue of writing a random pile of F/OSS that I published on my web site, when I have no relationship with the end-user or the company that manufactured $widget_x, be held liable for that widget failing and injuring the end-user, especially when I state up front that my pile of F/OSS comes with no warranty, little testing, and might randomly fail and kill your cat? [1]

If I'm to be held liable for <whatever> then I need to be compensated in proportion to that risk, or I'd have to be downright stupid to do it.

> Now, software companies get away with disclaiming liability because that's seems to work and not too many people complain about it. Regulations tend to be written in blood so unless we see a situation where lots of people start dying due to dodgy software I don't see a lot of regulation occurring outside of where it is already, eg cars, planes, medical devices.

Sure, but it's not "software" failing, it's "safety-critical product X (that happens to be partially implemented using software) failing".

That's the key difference. You're not regulating *software*, you're regulating *product that contain software*.

[1] Yes, I actually state that in the documentation. Should I somehow be liable because the user didn't read the documentation too?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 8:30 UTC (Sat) by johannbg (guest, #65743) [Link] (14 responses)

You do realize that you are the in the role of the manufacturer right?
You wrote the code.
You made the code publicly available/accessible.
You end up having to take responsibility/accountability for it.

Say a person makes lemonade, and then makes her or his lemonade publicly available and accessible under a sign "Free lemonade, drink at your own risk"; all the kids in the neighborhood see "Free lemonade" on the street corner, grab themselves some lemonade, and start dying because that person decided to mix some acid into his or her lemonade.

Is that person free of any responsibility and accountability because that person put up a lemonade stand with a sign that says "Free lemonade, drink at your own risk", and allowed everyone to watch how she/he poured acid into the lemonade while mixing it?

The answer to that is no, and that is one of the reasons why food regulations and the laws surrounding them exist in the world.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 11:00 UTC (Sat) by mpr22 (subscriber, #60784) [Link] (13 responses)

That scenario displays clear mens rea (the acid was deliberately introduced to the lemonade) and actus reus (the lemonade was offered with the intent that it would be consumed, and it would be reasonably foreseeable that serious injury or death would result from such consumption) for multiple counts of murder under the English common law (let alone modern food safety statutes and regulations), making it profoundly unsuitable for a discussion of the standard of liability where malice is not obviously feasible.

What we're talking about here is more along the lines of:

Person (not necessarily natural person) 1: "I released some software for free. I haven't prepared a formal proof of its correctness, nor an evidence package illustrating its suitability for safety-critical uses, so it's unsuitable for use in safety-critical systems unless you are willing to do all that work yourself."

Person (not necessarily natural person) 2: "Don't care, I'm using it in my driverless car anyway."

Person 2 is the natural target (and, in practice, probably the only worthwhile target) for civil proceedings.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 11:00 UTC (Sat) by mpr22 (subscriber, #60784) [Link]

"not obviously feasible" <---- Er, I meant "not obviously present". Brain. Fingers.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 13:31 UTC (Sat) by johannbg (guest, #65743) [Link] (10 responses)

What we are talking about is holding people accountable for their actions. No "license" is going to prevent that.

As software becomes a more and more integrated part of society, more rules, regulations and laws will be built around it.

If that did not happen, then everybody would just F/OSS their software (corporations and people alike) and be free from any accountability.

It might not be what people like or want but it's inevitable evolution, just as has happened in every other industry.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 15:02 UTC (Sat) by mpr22 (subscriber, #60784) [Link] (1 responses)

If you incorporate software into your product without verifying its safety, then in the first instance the natural civil liability for any defective behaviour arising out of faults in the software that you incorporated into your product lies with you, even if someone else wrote it (especially if they wrote it in their spare time, before your product even existed as a concept, and distributed it for free).

Now, if you can prove a sufficient degree of negligence, recklessness, breach of contract, sabotage, and/or fraud on the part of whoever made the software, then it would be natural for some portion (possibly as much as 100%, depending on the nature of the defective behaviour, the nature of the tortious conduct, and the adequacy of your attempts to guard against defects in the software) of the liability to be transferred to them.

Any idiot who decides it would be a good idea to grab a pile of pre-existing code written by a no-name rando with a net worth of a few thousand dollars and stick it in their widget without stopping to ask themselves "should I really trust, without further verification on my part, this software written by some no-name rando whose response to a liability claim would be to declare bankruptcy because the legal fees alone would be more than their combined net worth and gross annual salary?" needs to reconnect with reality.

Woodruff: Weird architectures weren't supported to begin with

Posted Dec 22, 2022 6:25 UTC (Thu) by mrugiero (guest, #153040) [Link]

> Any idiot who decides it would be a good idea to grab a pile of pre-existing code written by a no-name rando with a net worth of a few thousand dollars and stick it in their widget without stopping to ask themselves "should I really trust, without further verification on my part, this software written by some no-name rando whose response to a liability claim would be to declare bankruptcy because the legal fees alone would be more than their combined net worth and gross annual salary?" needs to reconnect with reality.

And that's, IMO, where the regulation should start and stop.
You provide a paid product or service? Either you can prove who provided your supplies or you're directly liable. In other words, a proper regulation wouldn't be about what's required for you to do something, but what's required for those who are supposed to "guarantee" it works correctly.
One of those requirements is to have a face and a name for everyone involved in the making. If you can't do that for an open source project, then don't use it for your product, or you're liable for any defects it may have. The author of such code should still be the one with full choice about whether to remain anonymous; it's you as downstream that's responsible for picking only the public ones. Anyone who decides to use the anonymous author's code is legally responsible for doing so.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 16:25 UTC (Sat) by pizza (subscriber, #46) [Link]

> What we are talking about is holding people accountable for their actions. No "license" is going to prevent that.

Remember that absent the "license" you don't have the rights to use my software, period. Only that license gives you that right.

In exchange for the right to use my software, you have to agree not to hold me liable should your house burn down.

You chose to use my software, and your house burns down.

Who is "accountable" here, the software author who explicitly stated, in advance, that their stuff can't be trusted, or the person who chose to use it anyway?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 21:29 UTC (Sat) by farnz (subscriber, #17727) [Link] (6 responses)

Equally, though, at least in my jurisdiction, liability laws for anything only expect you to be liable for reasonably foreseeable consequences of your actions, and do allow for disclaimers of liability even for things like buildings and cars. Not total disclaimers, but (for example) the skylights for my house have a legally enforceable disclaimer of liability for faults other than manufacturing defects and undisclosed issues with the design.

The much more likely model for software liability here would be to copy consumer goods (everything from pens through to cars); in that model, the default liability for any product is limited to the price paid to you, and a full refund for the faulty goods is normally the limit of your liability. The only time where you have further liability beyond the purchase price paid to you is when you could foresee that the part you supplied would, when used correctly, cause the extra costs that the buyer has incurred; one way that you can foresee such things is if the buyer explicitly tells you about those extra costs.

So, for example, I buy a washing machine; it doesn't wash clothes and doesn't function. The seller's liability to me is limited to a full refund. I tell the seller that I need a washing machine that works because if I can't wash my clothes tonight, I'm going to have to spend £100 on new clothes tomorrow, and that is enough that the seller has to either refuse to deal with me (allowed, of course!), or is liable to me for £100 above a full refund.

And note that these only apply if the seller is acting as a business, not as a private individual; if I sell you my home-made 6m band dipole aerial, then caveat emptor applies (unless I'm producing aerials as a business).

For a practical example of how all of that interacts, consider the engines sold by Honda for other people to build into products. If I buy an iGXV800 from Honda on their normal terms, they are liable to me if the engine does not function, or fails in use, to the cost of refunding me for the engine in full. However, if I build that engine into a motorbike of my own design and sell it to you, Honda don't acquire any additional liability; if you have a catastrophic engine failure while riding it, Honda are still only liable for the cost of the engine, as they did not foresee the extra consequences of a failure while using it in a motorbike (it's not sold for that use). Had I bought a Honda CB300R motorbike, and sold it to you, Honda could now foresee the consequences of a catastrophic engine failure while riding it, and would have the extra liability that results.

Translated to open source terms, most projects would have no liability still - the default position is that you're liable to supply a full refund if it doesn't work, but as no money has changed hands, that's a non-effect. Samsung would be on the hook, however, if the software in a Samsung phone does not work, even if it includes open source, because money changed hands; even then, the normal limit is the money I paid for the phone. Tesla, on the other hand, could be on the hook for far more money, even if their FSD software is mostly open source, and even if the failure is caused by an open source component, because Tesla could predict that it might crash. The developer of (say) a computer vision component used in the FSD software, however, is only liable to Tesla for what they agreed to (as part of a contract), or the money paid by Tesla to them for the software (probably nothing if it's open source).

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 22:28 UTC (Sat) by johannbg (guest, #65743) [Link] (5 responses)

If we take, for example, this latest hack into Microsoft Exchange server [1][2], which has compromised several hundred thousand instances worldwide (and which seems to be somewhat hushed up, since this is probably a bigger hack than the SolarWinds one), I would not be surprised if the "manufacturer" of software were held liable for "faults" in the software (negligence) in the future.

Arguably Microsoft in this case should be held accountable for its own negligence towards the US government, its taxpayers (who have probably spent billions in license fees) and of course the rest of the world as well.

People also need to realize that as open source has become more widespread and used, it has also become increasingly affected by societal issues, both ethical and political (the US gov + Huawei + Google case is probably a good example of such a political issue), all of which will affect the future framework (rules and regulation) surrounding it and how the rest of the software sector is shaped.

1. https://cyber.dhs.gov/ed/21-02/
2. https://www.microsoft.com/security/blog/2021/03/02/hafniu...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 23:25 UTC (Sat) by mpr22 (subscriber, #60784) [Link] (1 responses)

The natural strategy for dealing with liability issues around Exchange is to say "this is a commercial product offered for sale, so no, you cannot in fact waive the expectation that what you provide is of merchantable quality".

That liability model breaks down with free software because identifying an entity to which you can both reasonably(1) and usefully(2) attach civil liability will frequently lie somewhere between "difficult" and "impossible".

(1) "Reasonably" meaning that it is fair and equitable to hold the identified entity responsible in tort for the incident that has occurred.

(2) "Usefully" meaning the plaintiffs have a realistic prospect of recovering a useful percentage of their damages from the defendants identified, rather than just bankrupting the defendants to the sole benefit of the lawyers.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 14:11 UTC (Sun) by Wol (subscriber, #4433) [Link]

And identifying the individual concerned will lie between "difficult" and "impossible".

Take sendmail (seeing as we're talking about MS Exchange Server) as a case in point.

Allman wrote it in the kinder, gentler days of the gentleman's internet. Lots of people modified it to do things Eric never thought of. Then came the crackers who abused it.

Is it Allman's fault - for not foreseeing the future? Is it the fault of the people who re-purposed it to suit themselves? Is it the fault of the distros, or the software repositories, who made it freely available? Is it the fault of the people who didn't understand how to configure it securely?

Even identifying who those individuals are is fraught with problems.

Cheers,
Wol

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 23:44 UTC (Sat) by farnz (subscriber, #17727) [Link] (2 responses)

And, taking that Microsoft Exchange server issue as an example, Microsoft are the final vendor; by default, unless they could reasonably foresee the issue, they'd be liable to at most a full refund for the licence fees everyone has paid them for licences for those instances. Of course, they could be liable for more - but no amount of disclaimers will limit their liability below the sum paid in my local jurisdiction.

This doesn't mean that it will affect open source - Exchange is a product, but there's no liability on (e.g.) the RSGB for publishing circuit diagrams in RadCom that could be dangerous if constructed badly. That said, it will affect people like Canonical, SUSE and Red Hat - if you're selling open source software (even just as a bundle with support), you become liable to ensure it works, or to refund people.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 0:07 UTC (Sun) by pizza (subscriber, #46) [Link] (1 responses)

> But no amount of disclaimers will limit their liability below the sum paid in my local jurisdiction.

This is a key point -- If your jurisdiction decided to change the law to override disclaimers of warranty and/or liability waivers, it would affect far more than the likes of Microsoft or "software". I don't think it's an exaggeration to say that it would bring most of the economy to a screeching halt, and any software/products/services offered under the new regime would come with a _much_ higher price point, proportional to the heightened, un-waiveable liability the seller/producer/manufacturer is potentially responsible for.

This distinction is why you can buy device that measures your pulse for $20, but a "medical device" that does the same thing (with the same fundamental components!) costs $2000. The "medical device" is sold with explicit promises of merchantability, reliability, and accuracy, and there are major penalties if it fails even if there was no malice or negligence involved. The development and certification process necessary to meet those requirements is quite extensive, and therefore quite expensive. Plus you have to carry significant amounts of insurance to ensure you can meet those liabilities.

> That said, it will affect people like Canonical, SUSE and Red Hat - if you're selling open source software (even just as a bundle with support), you become liable to ensure it works, or to refund people.

"works" for what purpose, exactly? That's going to have to be explicitly spelled out...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 20:13 UTC (Sun) by farnz (subscriber, #17727) [Link]

Disclaimers of warranty are already overriden by local law here, and have been since the 1970s (coming up to 50 years). A product has to be of a reasonable standard given the price charged, and to last a reasonable time, again taking the price into account; it is expected to function as advertised before the sale.

So, the $10 pulse oximeter I own calls out what I can expect of it on the packaging - high error margins, low reliability. There's no disclaimer of warranty, nor a liability waiver; instead, there's setting of expectations so that the company selling the device is clear that they're not liable for anything beyond the functionality of the device, and that the functionality is about what you'd expect for a $10 device. If it doesn't do what they've promised it does to a reasonable standard for a $10 device, then my options are limited to a repair, replacement or full refund at the vendor's discretion in the first instance (I get a right to a refund if they cannot repair or replace) - so $10 at most. The similar device a local hospital uses does indeed cost a lot more - but as you say, that is because they are promising a lot more.

The net effect is that companies become very clear about what functionality you can expect from a device, because full refunds are not something you want to give very often. For Canonical, Red Hat, etc, the result is that you advertise a lower expectation, because you can be held to that; so Exchange would have to be advertised as insecure to escape liability for the hack.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 16:18 UTC (Sat) by pizza (subscriber, #46) [Link]

> Person 2 is the natural target (and, in practice, probably the only worthwhile target) for civil proceedings.

I agree completely (though "worthwhile" in this context should mean "going after anyone else will get you laughed out of court and on the hook for the other parties' legal fees").

Every single scenario suggested as a reason we need "accountability/liability" has been malicious in nature (ie involving mens rea &| actus reus). I have yet to see anyone explain what sort of liability should flow from accidents, why that should extend all the way to the individual software authors (instead of the "owners" of the software) and how the software profession can possibly survive that.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 21:18 UTC (Fri) by Wol (subscriber, #4433) [Link] (2 responses)

> someone simply placing a signal jammer at a busy intersections which causes a vehicle to lose it's "connection" to the "cloud" and <bam> for the entire software industry to become heavily regulated.

That should NOT be possible. And that's one of the big problems with this - too many people think this is the correct solution when it is provably disastrous. An autonomous vehicle should be exactly that - autonomous! If it relies on external back-up, then that backup will *inevitably* fail when it is needed. Muphrys law and all that ...

Cheers,
Wol

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 21:54 UTC (Fri) by johannbg (guest, #65743) [Link]

Well, the industry does not like to talk about how easily you can fool or disrupt vehicles these days with signal jammers, 3D-printed objects, etc., since that does not quite sell the idea to investors and governments. + We already have fully autonomous vehicles with all the capabilities you describe, and they are called taxis...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 22:12 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> Muphrys law and all that ...

Is this a British thing? I think you're referring to Murphy's Law. Muphry's Law seems to be:

> If you write anything criticizing editing or proofreading, there will be a fault of some kind in what you have written.

and a deliberate misspelling of the other one.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 16:56 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

We do this with our project. MinGW is not something we test or particularly care about. However, if there are simple patches that one can provide to make it work there (while not breaking everything else), we'll take them with the continued assumption of "we don't test that configuration". Patches trickle in occasionally for the setup, mostly after releases.

There are issues on some distros which toss in linker flags which break us (-Wl,--no-as-needed). However, given the way Python is "supposed" to work, there's one flag that can fix it for distros and the like[1], but for other distribution mechanisms, the flag just needs to be removed to work properly.

[1]Making it work with that flag "pins" the Python library that is used for the Python components rather than letting it "float" and use whatever the loading Python interpreter brings along.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 21:21 UTC (Wed) by rodgerd (guest, #58896) [Link] (1 responses)

> Meanwhile, you keep fixating on "untested binaries" -- So does the distro running the upstream test suite (if any) as part of its build sufficient to make those binaries "tested"? And if there's no test suite, what makes upstream's binaries any more "supportable"? I should point out that for the overwhelming majority of F/OSS, "upstream" does not provide anything more than bare source code. Does that mean that this software is inherently "unsupportable"?

Which, per TFA, is a problem, because "can be compiled to a binary" is all most distros are really offering: "this C code builds, but who even knows what that means".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 15:19 UTC (Thu) by farnz (subscriber, #17727) [Link]

And that, in turn, is only a problem because expectations from code authors at the top of the stream through users at the bottom of the stream were not aligned. If the distro authors depending on it working on obscure platforms were willing to put the work in to keep obscure platforms working (which could include forking the source), or if they were clear that obscure platforms could break at any time, and you'd keep all the pieces, it wouldn't be as loud.

Instead, we've had a lot of heat about how it breaks builds on obscure platforms, mostly (with the honourable exception of M68k, where John Paul Adrian Glaubitz has stepped up saying that he'll work with people to get M68k supported, but needs more time to get Rust on M68k Linux working well) from people who aren't willing to put any work in to keep the C code in good shape.

Fundamentally, it all boils down to who expects what from whom - and this episode has brought some implicit expectations people have that haven't been talked about explicitly before now to the surface. Hopefully, the OSS community as a whole will come to a conclusion on what expectations are reasonable.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 11:50 UTC (Wed) by Wol (subscriber, #4433) [Link]

For a perfect example one just has to look at the new article about pipewire.

A lot of people will be glad to see the back of PulseAudio, but seriously, PulseAudio's biggest problem was distros shipping broken code!

PulseAudio *required* that the systems it sat on top of did what they claimed to do, and all too often they were buggy and didn't ... and the distros were quite happy to ship beta PulseAudio without fixing the underlying layers first.

Cheers,
Wol

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 8:44 UTC (Tue) by pm215 (subscriber, #98099) [Link]

I'm not sure I'd take that bet. There have been significant numbers of people paid to get Linux ecosystem software working well on Arm for decades, covering not just the kernel proper but also gcc, llvm, gdb, Rust, Fortran compilers, browser Javascript engines, Java, fixing build issues, scanning distro package archives for programs with inline asm that would benefit from having Arm versions of that asm, and a lot of other stuff I can't think of off the top of my head because it's a bit more out of my area. Yes, absolutely there is reliance on and appreciation for individual projects and developers who add or improve Arm support because they personally wanted it, but there is also sustained corporate effort from multiple companies. Getting an architecture up from 'mostly seems to work' to 'parity with x86-64 on everything, including performance' doesn't happen overnight, even if the initial "boots and runs the basically-portable stuff" is a comparatively easy bar to clear.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 8:07 UTC (Thu) by jezuch (subscriber, #52988) [Link]

> On the other hand, there's something about this that bothers me: the more that this attitude becomes the norm, the more difficult it is for new architectures to gain ground in the first place.

I think there's a huge difference between new architectures, and old, obsolete ones. The former are actively worked on and the fact they exist means there's a significant reason for them to exist, and so there's likely a significant reason to support them. The latter are just... Obsolete. They no longer have much of a reason to exist. (Their users will vehemently disagree, of course :) )

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 19:01 UTC (Mon) by flussence (guest, #85566) [Link] (22 responses)

> I put this one last because it’s flippant, but it’s maybe the most important one: outside of hobbyists playing with weird architectures for fun (and accepting the overwhelming likelihood that most projects won’t immediately work for them), open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

Isn't the whole root of this discourse that *Rust* is one of those?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 19:43 UTC (Mon) by farnz (subscriber, #17727) [Link] (21 responses)

What do you mean by "Rust is one of those"? It's not hardware, nor is it a large corporation's platform (it's a programming language with an open-source implementation). And the complaint from the affected users is that their weird hardware is no longer supported because pyca switched from being C + Python to being C + Rust + Python, but the affected users aren't in a position to enable Rust support for their weird hardware. It's easy to forget that bringing up open source on a weird hardware platform took time and effort to begin with - GCC did not magically get an ARM backend in 1987, someone had to code it - and that it will take time and effort to keep the weird platforms working.

We've been in a long period of stability in which GCC's frontends have defined what counts as a usable language for open source - either it needed a GCC frontend, or it needed an interpreter that only depended on GCC - and GCC's backends have defined which CPUs and platforms were usable. That stability is coming to an end, as more people turn to LLVM as their preferred compiler/JIT framework instead of GCC, and the resulting languages turn out to be interesting to more than just academia. In turn, this means that people who have been able to free-ride on the work done on GCC for their architecture in the past are losing that free ride, and will have to pay the LLVM (and possibly Rust) taxes for their architecture. Note, of course, that if you were starting truly from scratch with a hobby OS and CPU ISA, you'd also have to port a compiler, a libc, and many more pieces.

In the end, this is a natural extinction event for many architectures - if you really need support for hardware that LLVM doesn't currently work with, you either need to do what people like John Paul Adrian Glaubitz are doing and step up to provide support for your hardware in LLVM (and possibly Rust), or you need to accept that it's a toy project, and that if you don't have the resources to follow where the big boys are going, then you're not going to be able to keep up with the modern world.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 21:09 UTC (Mon) by pizza (subscriber, #46) [Link] (13 responses)

> In turn, this means that people who have been able to free-ride on the work done on GCC for their architecture in the past are losing that free ride, and will have to pay the LLVM (and possibly Rust) taxes for their architecture.

The thing is, *everyone* involved is getting some amount of "free ride". Including the pyca authors.

(And speaking of folks using these old architectures, the remaining holdouts tend to be the ones who had historically provided said free ride. They've been putting in sweat equity, and telling them "you have to do a _lot_ more work now" is ... kinda crappy..)

> In the end, this is a natural extinction event for many architectures - if you really need support for hardware that LLVM doesn't currently work with

It also widens the chasm that any new architecture needs to cross in order to be viable.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:33 UTC (Mon) by roc (subscriber, #30627) [Link] (3 responses)

Requiring LLVM doesn't necessarily widen the gap, since we can probably eliminate the requirement for a GCC backend.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 10:00 UTC (Tue) by xophos (subscriber, #75267) [Link] (1 responses)

As soon as the linux kernel builds reliably with clang...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 18:29 UTC (Fri) by khim (subscriber, #9252) [Link]

Well… there are billions of people who use a clang-built kernel today… that should count for something…

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 13:05 UTC (Tue) by pm215 (subscriber, #98099) [Link]

I suspect most companies working on compilers would love to be able to focus on one compiler rather than two; but I would not bet on llvm taking over from gcc for generic desktop/server Linux in the next decade. Inertia is a strong force, it's currently only just verging on even technically possible (compiling the kernel with clang is only possible for a few architectures right now, I think), and for distros and server/desktop end-users there really isn't a strong motivation to change. (Contrast macos, Android and the BSDs, where the license is a motivating force.)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 10:41 UTC (Tue) by farnz (subscriber, #17727) [Link] (4 responses)

The same sorts of things have happened in the past with other architectures, too, though - the people who brought us VAX and PDP-11 support were told that they didn't matter any more, and they didn't choose to keep doing the work.

That's why I talk about this as a natural extinction event - we've had a period where the free ride was relatively widespread, and now it's being narrowed again to just the platforms that people who do the work care about (whether that's because they can do the work themselves, or are willing to pay for it to happen). Once the kernel compiles with clang (or someone revives https://dragonegg.llvm.org/), then the chasm narrows back down - you have to get LLVM supported, but you lose the need to support GCC in return.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 16:17 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> Once the kernel compiles with clang
Android has been building production kernels with clang for 2 years.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 19:52 UTC (Tue) by flussence (guest, #85566) [Link]

Isn't LLVM *better* at compiling the kernel right now? IIRC the LTO patchset only works with clang.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 7:35 UTC (Fri) by dvdeug (guest, #10998) [Link] (1 responses)

Are you going to compile the GNU utilities on a compiler that wasn't supported to begin with? It seems weird to respond to a complaint about people compiling for unsupported architectures to propose that you can begin to compile everything with a compiler that isn't supported upstream.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 9:28 UTC (Fri) by farnz (subscriber, #17727) [Link]

It puts you in exactly the same position that you're in now, but with a suite of compilers that support all the languages you want - remember that while the GNU utilities are only supported upstream with GCC, nothing is supported on your platform until you have a working compiler.

Thus, you're providing the support for everything - and I would have as much time for you complaining that GNU utilities don't work on your new platform as I have for people complaining that Rust doesn't work on their niche platform (none at all - you're the one doing something that upstream isn't supporting, you deal with the fallout).

And depending on what you're doing, you might not need the GNU utilities at all - Busybox, for example, provides alternatives to the GNU utilities.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 10:53 UTC (Tue) by farnz (subscriber, #17727) [Link] (3 responses)

And also, a lot of the complaining is coming from people who haven't provided said free ride; the complainants talking about HPPA and S390 are literally complaining that it used to build on those architectures, but doesn't any more. I have a lot more time for the complaints about M68k support because (a) they're coming from people who actually used and bugfixed the software in past releases on M68k, and (b) they're actually working to find a way forward with Rust support for the platform.

If I flip the question round, it becomes clearer to me: what degree of effort does a project have to make to be compatible with platforms that are no longer properly updated? Are we choosing to limit ourselves to the versions of C and C++ that're supported by MIPSPro on IRIX because there are people who want to use it (there's a surprisingly vibrant IRIX hobbyist base), or are we going to say that you can use C99, C++11 and other more modern languages, and those platforms need to keep up or die off?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 15:38 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (2 responses)

> there's a surprisingly vibrant IRIX hobbyist base

Interesting. CMake dropped support for IRIX years ago (because we don't have anything to test with). Are they just using an old CMake version, only using software from that era, or doing everything with CMake-less projects (possible, but more likely a result of not updating software stacks I'd imagine)?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 9:14 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

Mostly running period-appropriate software on them, and either fixing things locally or not using modern software.

The systems involved tend not to be internet-connected because of this - the risk of running IRIX 6.5 on today's Internet is not worth taking.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 22:22 UTC (Thu) by ejr (subscriber, #51652) [Link]

It wasn't worth taking that risk then, either. xauth + at boot...

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 19:05 UTC (Tue) by flussence (guest, #85566) [Link] (6 responses)

> What do you mean by "Rust is one of those"? It's not hardware, nor is it a large corporation's platform (it's a programming language with an open-source implementation).

Java, Golang, C#, and Swift also have open implementations (some of them even bother to derive those from a specification), but most of us understand that “who owns C?” is a uniquely nonsensical question to ask.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 19:25 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (5 responses)

> but most of us understand that “who owns C?” is a uniquely nonsensical question to ask

That would apply to Rust as well since unlike other examples in your list, there is no single vendor driving the language.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 19:36 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link] (1 responses)

I think a lot of people still think of Rust as being something Mozilla is pushing, even though Mozilla has now laid off most of the Rust developers they once employed.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 23:48 UTC (Tue) by roc (subscriber, #30627) [Link]

I'm not aware of a single person Mozilla currently pays to work on Rust ... which is great. Rust grew up and Mozilla doesn't need to invest in it anymore.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 18:55 UTC (Sat) by Mook (subscriber, #71173) [Link] (2 responses)

There's an important difference between C and Rust at this point, though: there are multiple production-level C compilers, but only one for Rust. That means there is a chance that some group could take over Rust development in the future; C, in contrast, has multiple production compilers, some of them closed source (and therefore difficult to take over).

That's probably reflective of Rust still being under heavy development; mrustc is smaller in scope (doesn't aim to do borrow checking), and still could not keep up to date with the current compiler. This is not really the fault of any party, of course.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 21:33 UTC (Sat) by farnz (subscriber, #17727) [Link]

On the other hand, that makes Rust the same as Linux; there's only one complete Linux kernel, but it's not exactly owned by any one company in particular. In theory, someone could buy out Linus and have the One True Linux Kernel, but in practice enough other people are involved that this is not a risk.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 23:05 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

There are efforts to add a Rust frontend for GCC: https://github.com/Rust-GCC/gccrs

Rust is getting pretty stable now, so this actually starts to make some sense. And unlike Go, Rust is a fairly straightforward language, it doesn't need a runtime or GC.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 19:21 UTC (Mon) by JoeBuck (subscriber, #2330) [Link] (10 responses)

The way to fix this issue, for free software that depends on Rust, is to port Rust to GCC so that its cross-compilation abilities can be used. There's a serious effort under way. Those interested in architectures that LLVM doesn't support could contribute to that effort.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 21:38 UTC (Mon) by rodgerd (guest, #58896) [Link] (9 responses)

You've managed to miss the most important point of the article: that these architectures have been providing security theatre by purporting to run software that just compiles, maybe runs semi-reliably, and certainly has no particular guarantee that it actually runs as intended.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:13 UTC (Mon) by pizza (subscriber, #46) [Link] (7 responses)

> that these architectures have been providing security theatre by purporting to run software that just compiles, maybe runs semi-reliably, and certainly has no particular guarantee that it actually runs as intended.

Uh.... can't the same be said for pretty much every other architecture? (Or indeed, for anything other than the specific system that the upstream CI system used to run their tests, if any?)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:22 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (3 responses)

Basically, yes. Folks have been asking us to provide Python wheels for Linux/aarch64 for a while, but we have no way of testing it, so I don't want to put out an official wheel without some actual trust that it has a hope of working. Instructions for making your own wheel are available for those with the gumption to do that, but the thing is that any bug submitted is basically untestable by me.

*Especially* in a project full of C and C++ code that does low-level file I/O (especially binary), hardware interaction, or anything like that, anything not tested regularly is not trustworthy to the level people would like or even probably think it is.
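To make that concrete, something like the following (a made-up sketch, not anything our project actually ships; the set of tested platforms is invented for illustration) is roughly what "untested" means in practice - the best the code can do is tell the user that nobody stands behind this combination:

# Hypothetical sketch: warn at import time when the running platform is not
# one that upstream CI exercises. The TESTED set below is invented.
import platform
import warnings

TESTED = {
    ("Linux", "x86_64"),
    ("Windows", "AMD64"),
    ("Darwin", "x86_64"),
}

def warn_if_untested() -> None:
    key = (platform.system(), platform.machine())
    if key not in TESTED:
        warnings.warn(
            f"{key[0]}/{key[1]} is not covered by upstream CI; "
            "this build is effectively untested and may misbehave.",
            RuntimeWarning,
            stacklevel=2,
        )

warn_if_untested()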

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 13:50 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

> Basically, yes. Folks have been asking us to provide Python wheels for Linux/aarch64 for a while, but we have no way of testing it

Um... aarch64 boxes literally cost a few quid at this point, now that a lot of the SBCs are aarch64. Surely it's easy to get a way of testing things there?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 15:25 UTC (Sun) by mathstuf (subscriber, #69389) [Link] (1 responses)

Our software needs OpenGL tested and generally need hefty hardware to do builds in a reasonable timeframe. We've gotten a few for other projects but we haven't gotten them for this one yet. First up is more macOS and Windows cycles since those machines already aren't keeping up.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 15, 2021 14:52 UTC (Mon) by nix (subscriber, #2304) [Link]

I'm sure you know this already, but the way one does this is to cross-build (or, since that is often difficult, build atop an ARM chroot with binfmt_misc set up to redirect things to qemu-user-arm): this works fine unless your build process requires something like extensive multithreading that qemu-user still does badly (so, 99% of build processes are fine). Then tests, and only tests, are run via ssh to an SBC which shares that chrooted filesystem with the host (and it's not as though OpenGL-capable SBCs are hard to find or costly, though if you want one with open source drivers that might be harder). This is not even slightly new in the embedded world: DejaGNU has long supported this model via its "remote board" concept.
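A rough sketch of that flow, in Python for concreteness (the chroot path, board hostname and package name are all invented, and it assumes the chroot is exported over the network so both machines can chroot into it as root):

# Hypothetical sketch of "build under qemu-user emulation, test on real
# hardware". Paths and host names are made up.
import subprocess

CHROOT = "/srv/arm64-chroot"      # assumption: ARM rootfs shared with the SBC
BOARD = "tester@arm-sbc.local"    # assumption: the SBC that runs the tests

def build_in_chroot() -> None:
    # binfmt_misc plus qemu-user makes the foreign binaries in the chroot
    # runnable on the x86 host, so an ordinary chrooted build works
    # (slowly under emulation, but on fast hardware).
    subprocess.run(
        ["sudo", "chroot", CHROOT, "python3", "-m", "build", "/src/mypkg"],
        check=True,
    )

def test_on_board() -> None:
    # Only the test suite runs on the real hardware, where threading and
    # hardware interaction behave natively rather than under emulation.
    subprocess.run(
        ["ssh", BOARD,
         f"sudo chroot {CHROOT} python3 -m pytest /src/mypkg/tests"],
        check=True,
    )

if __name__ == "__main__":
    build_in_chroot()
    test_on_board()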

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:35 UTC (Mon) by roc (subscriber, #30627) [Link] (2 responses)

Yes. Rust at least supports several platforms as "tier 1" where tests are run and work is gated on tests passing. https://doc.rust-lang.org/nightly/rustc/platform-support....

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 23:15 UTC (Mon) by pizza (subscriber, #46) [Link] (1 responses)

Rust isn't saying "we support any random x86_64 Linux system" -- instead they are saying "we test on x86_64-unknown-linux-gnu, i.e. 64-bit Linux (kernel 2.6.32+, glibc 2.11+)", with the unspoken implication of "if you're not using this, you're on your own."

In other words, deviate from that very-under-specified platform definition and you'll end up in a position where Rust "just compiles, maybe runs semi-reliably, and certainly has no particular guarantee that it actually runs as intended."

Given that most Linux users get their Rust toolchain from their distribution instead of upstream, they're all in the "no particular guarantee" territory since the distro build hosts are not configured identically to the upstream rust CI systems.

Mind you, that's not a bad thing in and of itself, and it affects pretty much all distributed-as-source software. [1] There's no particular guarantee that it worked on anything other than the original developer's system. Heck, even that's not guaranteed, if one reads the actual license text.

If upstream supplies a test suite then that can be used to build confidence that the software will "run as intended" on relatively random systems, but otherwise, caveat emptor.

[1] Granted, Rust is unusual in that it actually does need to know about the underlying CPU architecture and operating system/syscall interface.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 23:45 UTC (Mon) by pizza (subscriber, #46) [Link]

Oh, I should clarify -- I'm not saying that Rust is somehow deficient here; quite the opposite! They make it very clear what they consider "supported" and provide a test suite that anyone can use to (reasonably) guarantee that things can still be expected to work if you deviate from the supported platform configurations.

This is the situation in which most F/OSS projects arguably fall; the authors only "guarantee" it works on the platforms they use for development, with some sort of test suite that is used to build confidence that it can be expected to work on a relatively random system that still meets the minimum requirements.

But that doesn't somehow make compiling and using this software on relatively obsolete platforms "security theater".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:24 UTC (Mon) by JoeBuck (subscriber, #2330) [Link]

No, I didn't miss it, not after decades of participating in gcc development back when many more active architectures were in use than currently, because it was the height of the Unix wars then (I'm no longer active in gcc and haven't been for some time). Often bugs would only show up on some platforms. If there's an issue that only shows up on one platform, it might be an invalid assumption, or it might be a compiler bug. Bugs on the minor platforms didn't hold up releases; upstream cannot promise that ports will work and can't pretend to. The major platforms had organizations willing to support developers using those platforms; the minority platforms were on their own. That was always understood. But embedded platforms are more likely to have unusual processors.

Pretty much the entire reason for the fight the author's rant is responding to has to do with lack of Rust support on some platforms. The point is that this is a fixable issue.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 21:00 UTC (Mon) by martin.langhoff (guest, #61417) [Link] (11 responses)

Here's a different take on this. If a software component run entirely or largely by volunteer contributions is picked as a critical dependency by distros, _expectations suddenly change_.

When the component wasn't critical, you could pick up a dependency on Rust and not worry. Now your component is part of the critical path of bootstrapping some distro on a CPU arch you don't care about. The distro maintainers didn't ask your opinion before drafting you into their critical path.

Now stuff you didn't know about is broken, and "it's all your fault".

And sometimes the addition of a component is completely unintended, through transitive dependencies: Debian's bootstrap set includes Foo, Foo uses Zoolib, and Zoolib adds a dependency on Yummylib.

Maybe we're missing (a) tooling to detect a new dependency in a distro's critical bootstrap set, and (b) an etiquette for handling intended and unintended dependencies in the critical path.

"Hey, I'm a Debian packager and I am thinking of adding your component to our build this and that way. If it ever breaks on PPC, it'll stop our builds, is that ok?"

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 16:05 UTC (Tue) by LtWorf (subscriber, #124958) [Link] (6 responses)

It's other software that depends on it, not distributions.

Distributions package it because other software depends on it.

Do you think the author of a library should be asked "can I use your library?" by everyone?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 16:19 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (2 responses)

> Do you think the author of a library should be asked "can I use your library?" by everyone?

It's horrifying that we're even contemplating this. What happened to freedom 0?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 10, 2021 2:51 UTC (Wed) by milesrout (subscriber, #126894) [Link]

Freedom 0 doesn't say "you can use the software for whatever you like and I guarantee that I'll never release new versions of the software that are different in some way that is inconvenient to you".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 10, 2021 12:05 UTC (Wed) by farnz (subscriber, #17727) [Link]

Freedom 0 still applies - you can use the software as supplied or as modified by you for any purpose you like.

We're talking about cases where the user of a piece of software wants some assurance that the people supplying them software won't make changes that make their life difficult. You can't just assume that the people you get software from care about your application of it to a problem; if you want them to care, then you need to make sure that they know you want them to care.

This whole mess is about two different groups having different understanding of what they have implicitly agreed to as part of one group using the other's software.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 16:55 UTC (Tue) by martin.langhoff (guest, #61417) [Link] (2 responses)

You are right, probably the burden is on software maintainers, but distro packagers/maintainers usually have a clearer view of what's core infra; they deal with bootstrapping, etc.

If/when a library gets rolled into core infrastructure for the first time -- for a silly example, systemd starts depending on it -- I do think _it's a good idea_ to connect with the developers/maintainers.

Etiquette and good communication have nothing to do with software freedom. They're about how we work together.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 17:16 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 responses)

Even so, IMHO it's fair for upstream to respond to that sort of outreach with "OK, good luck with that. If it breaks, you get to keep both pieces."

In that case, the distro or downstream project might want to reconsider using that library in the first place. That doesn't mean they shouldn't do it, just that they should be aware of the possible outcomes and how they plan to respond to those outcomes. For example:

1. The upstream library is so ludicrously simple that we can't seriously imagine it breaking.
2. We have to vendor it anyway because of $REASONS, so we'll "just" fork it if it breaks.
3. This is an optional feature, and we'll drop that feature if the upstream library breaks. For example, GNU Readline.
4. The upstream library is something like libc. If it breaks, then the entire platform is in trouble and we'll probably have to pull support altogether.

Of course, (1) is risky, but some people like to live on the edge.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 17:43 UTC (Tue) by martin.langhoff (guest, #61417) [Link]

Yes. And communication also gives upstream and downstream the ability to have better interactions.

Now upstream knows that a risky change could blow up downstream. Or upstream could say: bad idea, it's known not to work in this or that context. Or: help me validate whether it works in context X. Or: let's get you testing early, before I release.

For the opposite scenario, where stuff blows up and it's all finger-pointing and griping... well, see the blog post above :)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 18:04 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)

"Hey, I'm a Debian packager and I am thinking of adding your component to our build this and that way. If it ever breaks on PPC, it'll stop our builds, is that ok?

This is backward. The question should be "We're thinking of adding this new dependency to our software. Are we prepared to help support it on every architecture we care about, even if the current developers aren't?" That's the difference between being a partner and a leech.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 20:11 UTC (Tue) by martin.langhoff (guest, #61417) [Link] (1 responses)

If you _communicate_, then... collaboration?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 23:50 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

Basically, you have a choice: you can be a user or you can be a developer. If you're a user, you have no obligation to the project you're using, but in return you get no say in the choices that project makes. If they abandon your platform, that's your tough luck. If you're a developer, you have an obligation to do your part to keep the project working, but in return, you get a say in the decisions the project makes.

If you choose to make a project part of your critical infrastructure, you need to become a developer rather than just a user. Otherwise, you're going to be stuck if and when the project makes decisions that interfere with the way you need to use it. Only by being a developer are you in a position to change those decisions. If you aren't going to put in the work to keep the project working with your platform, you have no right to complain when your platform is dropped for lack of support.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 5, 2021 12:25 UTC (Fri) by matu3ba (guest, #143502) [Link]

> Maybe we're missing (a) tooling to detect a new dependency in distro critical bootstrap set and (b) an etiquette in handling intended and unintended dependencies in the critical path.

Regarding (a):
Build systems are a clumsy mess (due to differing requirements), so the only way forward is to 1) define a standard for where and which language-specific things get resolved at download time, like Rust's Cargo.toml -- but in a language-agnostic way that can somehow be combined when you use different languages.
Until then, things need to burn very hot before maintainers can meet and accept such a common standard for resolving dependencies, because there are significant economic interests involved in keeping build systems incompatible.

The alternative 2) is to make build systems so trivially simple that the build system is a binary (the Zig approach). You could then strip out the URLs as identifiers, a bit like mapping the internet (but for dependencies). This is like the complex version of 1) and even more unrealistic, due to the same economic interests.

Any repackaging only complicates the picture, but repackagers could ship the dependency file with whatever binaries or libs they ship.

Regarding (b): LLVM is a big dependency that both simplifies and breaks a lot of stuff for language developers (you don't need to care about codegen, but you do need to care a lot about regressions and about keeping up with the significantly large LLVM API). The problem will remain unless we have a simpler, language-agnostic alternative to LLVM. My hope is that somebody figures out the memory and execution model (of LLVM or C) and that either an alternative pops up or LLVM gets forked to reduce complexity.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:14 UTC (Mon) by rweikusat2 (subscriber, #117920) [Link] (10 responses)

As someone who recently has had the displeasure of "bringing up rust" on two seriously "weird" architectures (x86_64 running Ubuntu 14 and ARM64 using a somewhat outdated version of Yocto supplied by the hardware vendor), I suggest the following: if you happen to use a project which adds a gratuitous dependency on Rust, consider getting rid of it. The headache you'll eventually find yourself with when using any platform other than "a recent developer laptop" is very likely not worth it.

NB: This is not a statement on the Rust programming language, which has some interesting concepts in it, and I'd welcome an opportunity to learn and use it for something.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:23 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (5 responses)

So, out of curiosity, what problems did you have?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 17:04 UTC (Tue) by rweikusat2 (subscriber, #117920) [Link] (4 responses)

In hindsight, not much: The LLVM shipped by Ubuntu was too old to compile the rust version I had to use, hence, I had to figure out how to compile a compiler in order to compile a compiler to compile an application (people wearing suits hate sentences like that). It took some time to work this out and to solve it (based on zero advance knowledge).

ARM/Yocto was a bit more complicated: getting a suitable LLVM built there required adding some libraries, one of the rust modules lacked a library reference needed for all of rust to be linkable, and finally there was some disagreement between autoconf and rust about what precisely constitutes a valid "target triplet" for this architecture, which could be solved by making a pretty minimal change to the rust build system code (written in Rust). But becoming familiar enough with all of the parts involved here to work out what precisely caused the build failures, and learning enough Rust to understand and change code written in Rust, was a bit of a steep learning curve.
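
To illustrate the kind of triplet mismatch involved (the names below are typical examples, not the exact ones from that build): a Yocto-style autoconf triplet such as aarch64-poky-linux has no direct Rust equivalent, so something has to translate it into a target name Rust does know about, along the lines of:

    # Illustration only: translating autoconf-style triplets into the nearest
    # Rust target name. The table entries are examples; a real recipe would use
    # whatever triplets its toolchain actually emits.
    GNU_TO_RUST = {
        "aarch64-poky-linux": "aarch64-unknown-linux-gnu",
        "arm-poky-linux-gnueabihf": "armv7-unknown-linux-gnueabihf",
    }

    def rust_target(gnu_triplet: str) -> str:
        # Fall back to the original spelling when no translation is known.
        return GNU_TO_RUST.get(gnu_triplet, gnu_triplet)

    print(rust_target("aarch64-poky-linux"))  # -> aarch64-unknown-linux-gnu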

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 17:50 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (3 responses)

Why did you not download the precompiled binaries (via rustup or whatever) for Ubuntu? Also, the Rust repo has the LLVM it works with in the repository (via a submodule). Given the patches they've had to apply over time, using an external LLVM probably wouldn't work out *that* well anyways.

For the Yocto bit, there is a TOML file which can describe the target rather than patching the compiler, but agreeing with autoconf does sound like something might need to be tweaked there. I think there are those who are doing embedded development with Rust; asking on those channels for guidance would likely be how I would have handled it. But, I haven't needed to do anything like that, so I don't have much experience to offer here.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 18:13 UTC (Tue) by rweikusat2 (subscriber, #117920) [Link] (1 responses)

Why does answering such a question invariably lead to someone answering questions nobody ever asked?

Ubuntu has a rust package. Please ask the maintainer why he doesn't just tell its users to download something from the internet instead.

Assumptions hard-coded in the build system can't be "overridden" via configuration files. That's, BTW, the very meaning of "hard-coded".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 18:28 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Well, if you're looking to compile it yourself, the Ubuntu package was apparently not suitable. Was it suitable? Then why were you looking to compile it yourself?

> Assumptions hard-code in the build system can't be "overridden" via configuration files. That's - BTW - the very meaning of "hard-coded".

Like I said, I've not dealt with embedded details, but I have seen that new targets can be described via TOML files (up to LLVM configurability). But yes, if you're fighting internal logic, you're stuck with patching.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 7, 2021 23:01 UTC (Sun) by nix (subscriber, #2304) [Link]

> Also, the Rust repo has the LLVM it works with in the repository (via a submodule). Given the patches they've had to apply over time, using an external LLVM probably wouldn't work out *that* well anyways.

This has improved greatly of late: since about LLVM 10, a separately-compiled upstream LLVM works well enough. (IIRC, there are hardly any patches left in Rust's not-a-fork of LLVM any more.)

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:38 UTC (Mon) by roc (subscriber, #30627) [Link] (3 responses)

What's weird about Ubuntu 14 x86-64?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 2:46 UTC (Tue) by gus3 (guest, #61103) [Link] (2 responses)

The number of patches Ubuntu applies liberally for basically every supported platform. But that's a habit it learned from Debian. RH does the same thing, but on an enterprise level.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 12:39 UTC (Tue) by amacater (subscriber, #790) [Link]

Without Debian, three quarters of *everything* on weird architectures outside x86_64 (and maybe aarch64) wouldn't exist. It's worth considering that Debian building on 10 architectures is actually a positive here.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 8, 2021 13:35 UTC (Mon) by gspr (guest, #91542) [Link]

For the vast majority of the software, Debian's patches are nearly trivial.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 22:15 UTC (Mon) by paravoid (subscriber, #32869) [Link]

The author seems to place a lot of the blame on downstream users for making ("bad") assumptions about which architectures one could use a library with -- witness statements like "[w]eird architectures weren't supported to begin with" in the title.

The problem with that statement is that, apart from a few extreme cases at both ends of the spectrum, "weird" is a subjective term, relative to the person, the passage of time (in either direction), etc. (ARM was a "weird" architecture not too long ago, and RISC-V is arguably still "weird" today).

Most projects do not have a set of "supported" architectures (also a subjective term, btw). Even pycryptography, which is one of the few projects that does have a "supported platforms" section (buried) in their docs, has that section starting "Currently we test cryptography on …" (test, not support) and then lists exclusively x86-64 and AArch64 on various OSes, and nothing else (not even i386 or 32-bit ARM). Is one supposed to assume that support for anything but those two architectures can disappear at any moment? Would it even be as popular a library if it had never worked on anything but those two in the first place?

In other words, in the absence of a list of a supported architectures, it is not a "bad assumption" to assume that it would work on the architectures that its toolchain supports.

The author later on seems to confirm that point by saying "LLVM does not support either native or cross-compilation to many less popular (read: niche) architectures" -- effectively, and likely accidentally, setting the bar for non-"niche" to "what LLVM supports" instead of the more traditional "what GCC supports".

Despite a) being sympathetic to and in support of the pycryptography maintainers for wanting to use Rust here and preferring one set of trade-offs compared to another and b) being generally not super sympathetic to ports for long-dead architectures (like m68k), I thought that the author missed the mark here and used some pretty flawed arguments to support their case.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 1, 2021 23:38 UTC (Mon) by clugstj (subscriber, #4020) [Link] (1 responses)

Some of those "weird architectures" were the ones originally targeted for much of the "open source ecosystem".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 0:18 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

Some of those "weird architectures" were the ones originally targeted for much of the "open source ecosystem".

That's nice, but it's not important. The PDP-11 was the granddaddy of Unix, but nobody insists we ensure everything can still run on a PDP-11 for old times' sake. There's no particular reason we should ensure everything continues to be able to run on an m68k or S390 just because they were important once. It's sad to let them go, but they're Free Software's past, not its future.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 0:03 UTC (Tue) by pabs (subscriber, #43278) [Link] (8 responses)

Why is there such a thing as "supported architectures" in the first place? The approach of artificially restricting platforms where free software is available and works seems to be completely against free software principles.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 0:14 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (2 responses)

No one is saying that pycryptography (or any project) can't work on those platforms. What the upstream developers (or at least this one) are saying is that compiling the code for any arbitrary platform removes any confidence upstream may have for the code doing what is wanted. Upstream never guaranteed that HPPA or S390 would work in the future. Debian (and others) seem to have made that assumption and are now realizing that it was an assumption. If HPPA/S390 support is wanted, the path is to get LLVM and Rust to handle the platform (as those are the tools upstream wants to use going forward). The alternative is to find a replacement for the package for their purposes.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 10:08 UTC (Tue) by tim_small (guest, #35401) [Link] (1 responses)

Given that Rust has already been ported to 16-bit and 8-bit microcontrollers, and supports pretty much every other 90s-era workstation/server RISC CPU architecture, it doesn't seem inconceivable that PA-RISC could be added (if the PA-RISC community wanted to). As for S390, I dunno, but S390x is already supported.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 12:13 UTC (Tue) by joib (subscriber, #8541) [Link]

Nobody has said it's impossible. The problem is finding somebody willing to put in the effort to make it happen, and support it going forwards. For microcontrollers there's a huge community of both professionals and hobbyists playing around with them, as they are small, cheap, and can be made to do useful things. So it's not surprising there's interest in using Rust for programming them.

For obsolete enterprise hardware, not so much. I can certainly see the value in preserving these for posterity like many other artifacts we save in museums, but in that case shouldn't you have the period-appropriate software on them rather than an up to date Linux distro?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 0:53 UTC (Tue) by mpr22 (subscriber, #60784) [Link] (3 responses)

Variously, it can mean:

I don't test my software properly in the first place, but if it breaks on the platform I use it on, I guess I need to fix that.

I haven't got a Foobar9000 to run the test suite on.

I can't afford a Foobar9000 to run the test suite on.

I despise FooCorp with a grim and arctic fury and would never willingly interact with their hardware or operating systems. If you try to donate me a Foobar9000 I will use it as a paperweight / doorstop / space heater.

The rest of the software stack for Foobar9000s hasn't matured enough for me to be confident that my bugs are in my software rather than the toolchain, FoobarOS, or the Foobar9000 itself.

I didn't know anyone had a Foobar9000 that had enough RAM and CPU performance to usefully run my software.

I didn't know anyone still had a Foobar9000 in a production environment.

I thought the only remaining Foobar9000s were non-operational museum pieces.

This is literally the first time I've ever heard of a Foobar9000.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 9:43 UTC (Tue) by smcv (subscriber, #53363) [Link] (2 responses)

And some more:

I know the tests fail on Foobar9000 because a Foobar9000 user told me, but I have no idea what's different about a Foobar9000 or how to fix it.

This software is part of a desktop environment, why do you want to run it on a Foobar9000 mainframe?

This software is for scientific number-crunching, why do you want to run it on a Foobar9000 embedded system?

This software is only useful if you have a (USB port / GPU / webcam / bulk data storage array / ...), is it even physically possible to attach one of those to a Foobar9000 (mainframe / embedded system)?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 20:09 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 responses)

And you also have the technical issues:

The Foobar9000 does some things that probably made sense at the time (segmented pointers, sign-magnitude integers, the IA64 NaT bit, putting something interesting at address 0, etc.), but my code is sufficiently low-level (e.g. a GCC or LLVM backend) that dealing with those things is actually painful and I don't feel like bothering.

The Foobar9000 does not have (an MMU/software programmable interrupts/an FPU/etc.) and I'm not willing to (replace all calls to fork() with vfork()/insert gratuitous sched_yield()s everywhere/avoid using floats/etc.).

I've never heard of anyone running any sort of software on a Foobar9000 without using FooCorp's proprietary non-Unix operating system, and I'm not porting my software to that because I would basically have to rewrite the whole thing from scratch.

I've never heard of anyone running any sort of "real" operating system on a Foobar9000 at all (e.g. because it was purpose-built to do exactly one thing, like the Apollo Guidance Computer), and my code is obviously incapable of running on the bare metal.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 10:08 UTC (Wed) by farnz (subscriber, #17727) [Link]

Or, for the nastiest of the technical issues, a cheap thing to do on modern hardware happens to be really expensive on the Foobar9000 (context switching, floating point addition, integer multiplication/division etc), and you're reliant on it being cheap. So your install behaves just fine, but users on the Foobar9000 complain that your software is a disaster, and you don't understand why.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 2, 2021 1:40 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

Calling something an unsupported platform doesn't mean the software doesn't work there; it just means the developers aren't making any promises about it working. They have finite resources and aren't obligated to spend them ensuring their program works on every system under the sun. If you run an obscure, exotic, or obsolete system, it's up to you to make sure the software you need runs there.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 2:39 UTC (Wed) by Subsentient (guest, #142918) [Link]

Ugh.
Look, I like Rust. I use Rust. But the portability complaint people are making is legitimate.

You're doing it wrong. Instead of trying to remove support for all of these architectures, help LLVM and Rust add support for them, or even help GCC gain support for Rust, which is my personally preferred solution.

The answer to "my language doesn't run on X" is not "fuck you for using X", it's "let's add support for X!"

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 11:39 UTC (Wed) by ceplm (subscriber, #41334) [Link] (3 responses)

> Open source was supposed to be fun!

Talking about bad assumptions: if you are writing security software for fun, then there is something wrong.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 16:32 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Different strokes for different folks. I find some people's choice of "fun projects" to be extremely dull. As I'm sure others do of mine. So what?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 22:56 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

Many fun activities will mean doing things that are less fun to enable them. Boundless hedonism is not generally what people mean when they say "fun". Perhaps they enjoy painting, and likely they don't find properly cleaning the brushes enjoyable, but they clean the brushes anyway because they see that as necessary to enable the fun bit. A picnic on the beach is fun for many people, getting sand out of everything not so much.

Raymond Chen has a concept of "paying taxes" as the parts of software development which are necessary (for quality of implementation) but generally not fun, like cleaning the paintbrushes. We should embrace tools (like Rust, or say, git) that reduce the amount of time we spend paying taxes without producing worse results.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 15, 2021 5:48 UTC (Mon) by flussence (guest, #85566) [Link]

And furthermore, Open Source was never meant to be fun. It was meant to gentrify Free Software, make it marketable to people who don't know what fun is.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 11:57 UTC (Wed) by xnor (guest, #125308) [Link] (3 responses)

The real question is: why is Python such a bad language that common libraries cannot be implemented in it without ending up with abysmal performance?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 3, 2021 19:36 UTC (Wed) by NYKevin (subscriber, #129325) [Link]

As the length of any discussion of Python increases, the probability of performance being brought up approaches 1. It's Godwin's Law for Python.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 1:11 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (1 responses)

Python is not a bad language. It's a language that has decided to optimize for programmer efficiency rather than program efficiency. That's a perfectly reasonable decision to make. To make the language practical for a broader range of applications, the designers have made it comparatively easy -- and a part of accepted programming practice -- to rewrite performance-critical code in higher-performance languages. That provides a very nice overall package. Programmers can write a whole application in an easy-to-write-and-maintain language and then rewrite performance-critical parts as they decide it's necessary. This goes along with Knuth's maxim that premature optimization is the root of all evil; it lets programmers focus on making it work first and making it fast later, as necessary.
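
A minimal sketch of that pattern, with made-up names (nothing here refers to a real package): the pure-Python version ships everywhere, and a compiled extension is swapped in for the hot path where one is available:

    # "Make it work first, speed up the hot path later": try a hypothetical
    # compiled extension, fall back to portable pure Python if it isn't there.
    try:
        from _speedups import xor_bytes      # hypothetical C/Rust extension module
    except ImportError:
        def xor_bytes(a: bytes, b: bytes) -> bytes:
            # Slower pure-Python fallback; runs anywhere Python does.
            if len(a) != len(b):
                raise ValueError("inputs must be the same length")
            return bytes(x ^ y for x, y in zip(a, b))

    assert xor_bytes(b"\x0f\xf0", b"\xff\xff") == b"\xf0\x0f"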

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 6, 2021 0:08 UTC (Sat) by xnor (guest, #125308) [Link]

> It's a language that has decided to optimize for programmer efficiency rather than program efficiency. That's a perfectly reasonable decision to make.

Except for the large parts where it's the other way around. Evidently, many python devs and users don't accept this "perfectly reasonable" decision because they don't want to sacrifice performance. They want it both ways.

But Python doesn't satisfy that requirement which makes it a bad language.

> the designers have made it comparatively easy- and a part of accepted programming practice- to rewrite performance critical code in higher performance languages.

It's a practice imposed upon Python devs because the language is bad. That it is an accepted practice is irrelevant. It's a bad practice. The problem in the article is just one consequence of this.
Instead of having a clear goal of being used only for prototyping and for projects where performance and efficiency do not matter, they worked around the language's bad design and performance issues instead of fixing the language.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 3:47 UTC (Thu) by Vipketsh (guest, #134480) [Link] (3 responses)

Open source works because we can rely on each other. If we cannot, we can't build on each other's work either, and so there can be no open source system.

In this particular case, like many above have stated, pycy did nothing wrong -- at least in a "violation of agreement" sense. On the other hand, if you are part of a community, your actions have an effect on others, and even if you have every right to take the action you did, you may gravely affect some other party that you rely on, possibly indirectly, and others are well within their rights to be unhappy and complain about it. Unfortunately for you, as the developer of something, once you make yourself part of a community you don't get to choose who relies on your work or in what way. That's a burden you get as part of the community. Also important to consider here is that it wasn't some random unknown project that was put in a bind, but distributions that, more than likely, the pycy developers rely on themselves. While I don't think pycy should be held accountable for fixing downstream, rants like these, which essentially say "I do what I want and I have a right to it, everyone else sod off", don't go down well with anyone.

If a distribution cannot rely on python or whatever to bootstrap it, what is the suggestion they do? Write a full scripting language and supporting libraries from scratch? Despite what it sounds like, I ask this question honestly. No-one even attempting to help the other side answer this is what I see as the biggest problem here: the pycy developers made a choice that affected others (distros) and I don't see any willingness on their part to help find a suitable solution or even a compromise.

I wonder what sort of rant the author would make if distributions made it very difficult for python to work on them? Would it be as simple and casual as "they don't owe us anything, we'll just move all development, infrastructure, etc. to *bsd/windows/whatever"?

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 6:41 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (2 responses)

This is not a matter of shared values or "being nice" to each other. It's a matter of economics.

Yes, economics. Economic theory is not about money. It's about the distribution of scarce resources, and developer time definitely qualifies as a scarce resource. If developers spend time working on X, then they cannot spend that time working on Y. At some level, there needs to be a process for choosing which of X or Y is more valuable. In this case, X is "supporting old architectures" and Y is "never having another memory safety issue again." pycy chose Y, because they are a security project. It's kind of their job to prioritize that sort of thing. You may disagree with that reasoning, but it's their time to spend as they see fit.

> No-one even attempting to help the other side answer this is what I see as the biggest problem here: the pycy developers made a choice that effected others (distros) and I don't see any willingness on their part to help find a suitable solution or even compromise.

Traditional capitalism would answer that by saying "You want to spend developer time on X? Then go do it. Don't complain that somebody else decided to spend *their* developer time on Y instead of X. If you can't or won't find the developer time to work on X yourself, and can't or won't come up with the money to pay someone else to do it for you, then you apparently don't want it badly enough for it to be worth doing." Complaining of a lack of "compromise" is empty; a compromise generally involves both sides making concessions to reach an agreement, but in this context, it would instead involve upstream making unilateral concessions, and downstream... complaining less loudly? That's not an agreement, it's a gift.

Given how decentralized the open source world is, I'm having a hard time seeing a viable alternative to the capitalist approach here. Are we going to have some kind of Grand Council of Open Source Projects that goes around ordering projects to work on X and not on Y, for the good of open source as a whole? That doesn't make any sense to me, and frankly that's not what I signed on for.

Would it be nice if the pycy developers had voluntarily prioritized X over Y? Well, maybe. But that's pretty subjective in my opinion. I think that "Y is more important than X" is a valid position for pycy to take, particularly when we're talking about internet security. And seeing as it's their project, I'm reluctant to second-guess them from afar. Does this cause problems for downstream? Of course it does. But "not causing problems for downstream" was part of X, and pycy picked Y instead of X. pycy decided that this was not a sufficiently important consideration for their project, and they have the right to make that choice.

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 9:17 UTC (Thu) by Vipketsh (guest, #134480) [Link] (1 responses)

To me your arguments are less "capitalist" and more "individualist": the individual (pycy) matters, and any issue they cause to others is in no way their problem. This only works when everyone else is working together except for you. Others' opinions probably differ, but I for one cannot see any future for open source if this becomes the predominant way of doing things. (I see taxation and explicit rules as a vital part of being "capitalist": both make sure to limit or fix damage caused to others. Neither exists in this context.)

Imagine a world where all projects regularly behave this way: libraries shifting ABI, compilers deciding not to compile things, X/Wayland changing protocols, Linux breaking user-space ABI, desktops changing shortcuts, web browsers refusing to render sites, etc., etc. All these projects, taken individually, are well within their rights to do any of that -- I am not arguing against this. Yet I cannot imagine how anyone could have any semblance of a working system, let alone a secure one. This eventually comes back to the pycy developers: if there is no working system, they have nowhere to develop it. This is the compromise I mentioned: pycy assists distributions in some way, and in return distributions keep the pycy developers' systems working (to be explicit: I assume they develop on Linux).

An analogy: I decide to make a huge compost heap on my property, but right next to the fence with my neighbour and 3 meters from their bedroom window. I am perfectly within my rights to do this. A court of law may even officially make a statement to that effect. Yet the relationship with my neighbour has gone sour because the stench, visuals and rodents are leaking into his house. Now, the only way to keep the relationship from going sour is if I understand and accept it when my neighbour explains to me that my actions are affecting him, and then I, of my own accord, move the compost heap someplace else. No amount of screaming "I have a right to this" or "if you don't like it, pay to move it yourself" is going to fix things. This case is not entirely dissimilar.

"never having another memory safety issue again."

Is this truly an honest statement? As I understand it, pycy didn't reinvent itself in rust -- the majority of their code stays in C; it's just that some (minor?) part of it is now in rust. So: a majority of code probably with memory safety issues, a minority without. Hardly a sign of "never having memory safety issues again".

Woodruff: Weird architectures weren't supported to begin with

Posted Mar 4, 2021 16:10 UTC (Thu) by farnz (subscriber, #17727) [Link]

Python cryptography is moving from C to Rust slowly, which is why there's only a small amount of Rust right now (at the time of the fuss, literally an empty module that existed to test that you could build Rust code and access it from Python). They intend to remove all the C code over time, but instead of dropping that on people in one fell swoop, they're starting with a release where nothing goes wrong if you can't build Rust code, thus enabling the discussion about whether Rust support is good enough for them to use to happen now, before they've switched lots of stuff to Rust, rather than in the future when they don't have C to replace the Rust with.

The problem is that, having done this, they got several sets of responses (once you strip out the flames):

  1. Alpine users being tripped up because their CI wasn't expecting this dependency, and Alpine's Rust story has only very recently stabilised.
  2. Debian users being tripped up because the early version needed a newer Rust compiler than Debian stable.
  3. Dr Glaubitz for M68K saying "I'll be OK with this in the future, but not just yet - please hold off while I get the M68K Rust story sorted out".
  4. Various people getting upset about "another dependency" to keep using the py-cy project.
  5. Gentoo's maintainer getting upset that Portage had a hard dependency on py-cy for all architectures, but they don't support Rust on all the architectures they claim to support Gentoo on.

Of those, Alpine and Debian users got changes to help them out. I lost track of what happened with M68k in all the heat, but I think Dr Glaubitz is taking a reasonable approach, and I'd hope the py-cy upstream is giving him the time needed to sort out the Rust on M68k Linux story.

The "another dependency" people are on a quest to reduce dependencies regardless of the outcome of the project - I think that, in general, that's a problematic view, because it sees the cost of the extra dependency, but not the benefits.

Finally, we come to the "weird architectures" that Gentoo supports and that were put at risk by this change. There's a lot of heat generated here (and from the Alpine users whose CI broke), but the summary is that they aren't happy that obscure systems might stop compiling the code, because then they will definitely get reports of issues from cross-compiling CI, whereas right now they don't get complaints, but don't know if it's used by anyone, or even if it works on the obscure platforms. All they know is that when py-cy is a mix of Python and C, it compiles, and when it's a mix of Python, C and Rust, it doesn't, and they feel no responsibility to come up with a better option than "don't do that change that you think is an improvement, because it stops it compiling on an obscure platform that I want you to support for me".

This is an intractable problem - you have someone who's been depending on upstream not changing things in a way that breaks them, but who isn't able to help upstream with their project; they are merely demanding that upstream gets their approval on changes that might stop things compiling on platforms that may not even work any more, and may not have any users. It would be nice if upstream could keep things working, but equally, which is a better use of upstream's time: improving things for the 99% of their users who have working Rust compilers, or keeping things in a good state for the few who neither have a working Rust compiler, nor care enough about their platform to get Rust working?

This is going to be a recurring question as Rust matures - it's the first programming language that's capable of replacing C and C++, and that has built up the developer momentum to actually have the potential to replace them. Java wasn't capable of being a replacement for C and C++ (the chunk of runtime that couldn't be written in Java was too large); D never built up the momentum. Basically, if you use a weird platform, what exactly are you owed by upstream?


Copyright © 2021, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds