
Conill: The long-term consequences of maintainers’ actions

Ariadne Conill looks at the difficulties caused by the OpenSSL 3 transition in the context of Alpine Linux.

For distributions, however, the story is different: cryptography moved to using Rust, because they wanted to leverage all of the static analysis capabilities built into the language. This, too, is a reasonable decision, from a development perspective. From the ecosystem perspective, however, it is problematic, as the Rust ecosystem is still rapidly evolving, and so we cannot support a single branch of the Rust compiler for an entire 2 year lifecycle, which means it exists in community. Our solution, historically, has been to hold cryptography at the latest version that did not require Rust to build. However, that version is not compatible with OpenSSL 3, and so it will eventually need to be upgraded to a new version which is. And so, since cryptography has to move to community, so does paramiko and Ansible.



Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 16:55 UTC (Fri) by smurf (subscriber, #17840) [Link]

Thanks.

Personally I'm somewhat annoyed at the MariaDB folks about this. Didn't we all learn anything from the last time a transition had a deadline? Like, Y2K?

2038-01-19T03:14:07Z will be a freakin' heap of fun …

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 17:09 UTC (Fri) by LawnGnome (subscriber, #84178) [Link]

My experience of working adjacent to OSS projects in companies is that it's almost impossible to get resources for this kind of work in advance.

A good tech lead can advocate all they want for "$X is going to have a release in six months, and it's going to break things, and we need to start working on it now", but even well-meaning PMs hear "in six months" and decide that they can slip in a couple of other projects first because they're "high priority for customer $Y". $Y pays a lot of money. (Or, worse, promises they eventually will.) Then one of those projects slips.

After that, said tech lead gets to sit in a meeting with a couple of layers of management and be asked why they weren't ready for $X 2.0, and they have to try to not use too many creative swear words. All while the silent part of their user base suffers.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 18:40 UTC (Fri) by walex (guest, #69836) [Link]

«even well-meaning PMs hear "in six months" and decide that they can slip in a couple of other projects first [...] After that, said tech lead gets to sit in a meeting with a couple of layers of management and be asked why they weren't ready for $X 2.0»

What this means is that project managers are not accountable for deciding to ignore platform changes because they are expensive, so from their perspective, in an ideal world there would be no platform changes. Not exactly a new problem in organizations that turn insufficient depreciation into a "profit" stream, and there are so many in a decadent economy. The same happens on a wider scale with missing road and bridge maintenance, unwillingness to increase capacity with load etc. etc. etc.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 19:18 UTC (Fri) by Vipketsh (guest, #134480) [Link]

Your analogy is very wrong.

It's more like all roads and bridges being in private hands, with lawmakers changing the standards for them on a yearly basis, essentially requiring a full rebuild each and every year. Don't do it and your bridge or road gets shut down.

While I know full well how clueless and idiotic managers can be, I also can't really fault them in their thinking. When you see that every few years the team needs to spend months tweaking existing working things while making no explicit bug fixes or improvements, and the result is more buggy than before, there is real value in asking oneself if the work is really needed now, or whether it would make sense to do the tweaks the next time the dependency decides to reinvent itself. You can sell "the dependency was young, it had issues we spent a bunch of time working around, but now with the new version things will be better going forward" only so often. It becomes increasingly difficult when the dependency is 25+ years old and it's the third or fourth such situation (I'm looking at you, GUI libraries).

In this case distributions are in the same position as managers and I can full well see their point.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 18, 2021 17:16 UTC (Sat) by jwarnica (subscriber, #27492) [Link]

Understanding the implications of reality, and of a software life cycle, is a management responsibility though.

Barring the whiners, of course: if a developer is abused for pointing out this week's problem, then it's time for them to find a new manager.

There are reasons to take on responsibility for all dependencies (safety-critical systems), but they are few and far between. Software depends on its build tools and on its platform, each of which depends in turn on other platforms, OSes, and hardware. It is expedient to use those systems that exist; management must understand that.

Developers should be honest up front about the expected or contracted lifecycle, and management must accept that risk.

"It's time to do the upgrade" should be met with healthy doubt, but not abuse. And once stated, it's management's problem to schedule time.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 17:25 UTC (Fri) by josh (subscriber, #17465) [Link]

> and so we cannot support a single branch of the Rust compiler for an entire 2 year lifecycle

This hasn't been that much of an issue for other Linux distributions. Yes, newer software requires newer Rust, but stable distributions typically keep the same base version of Rust and the same base version of other software.

This issue would only apply to a distribution that wants to keep Rust fixed at a specific version for 2 years, but wants to do not just security/bugfix updates but actual *feature* updates of other software. That isn't going to work, but it doesn't work with other software either: new feature releases of software often need upgrades to their dependencies.

If you want to freeze one version of Rust for 2 years, freeze the software depending on it as well. And it's absolutely reasonable to expect software written in Rust to not add requirements on new features in bugfix/security releases.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 17:45 UTC (Fri) by engla (subscriber, #47454) [Link]

> If you want to freeze one version of Rust for 2 years, freeze the software depending on it as well. And it's absolutely reasonable to expect software written in Rust to not add requirements on new features in bugfix/security releases.

There's another important special case (one of the true gems of the ecosystem!): Firefox, which also needs frequent updates. For this reason, most distributions have already had to deal with changing Rust requirements.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 19:43 UTC (Fri) by ariadne (subscriber, #138312) [Link]

> This hasn't been that much of an issue for other Linux distributions. Yes, newer software requires newer Rust, but stable distributions typically keep the same base version of Rust and the same base version of other software.

It isn't that it is an issue per se, it's that policy (both in rust, and in Alpine) is presently not quite on the same page. Other distributions begrudgingly upgrade rust in already released branches when needed, but only because they have to in order to support the security updates of other software. Firefox is an example here.

I would personally love to see Rust have a full support lifecycle in Alpine. This will probably come as Ferrocene plays out, which will provide LTS versions of Rust. I expect most distributions will switch to using Ferrocene, and this will all be behind us at that point.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 19:52 UTC (Fri) by josh (subscriber, #17465) [Link]

> Other distributions begrudgingly upgrade rust in already released branches when needed, but only because they have to in order to support the security updates of other software. Firefox is an example here.

This seems like a fundamental incompatibility between Firefox's security model and the distribution security model. Upgrading to a new major version of something can mean upgrading to new versions of its dependencies. That's true for any piece of software.

Perhaps it would make sense to have a version of Rust in the distribution used as a dependency of Firefox (and anything else that uses the "you must upgrade to a new major version" security model), and a version of Rust used for everything else that follows the distribution security model (maintain the same major version, incorporate minimal security/bugfix patches).

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 20:07 UTC (Fri) by ariadne (subscriber, #138312) [Link]

In Alpine, we have two repositories that are included in each release branch. There is main, which has a minimum 2 year support lifecycle for all packages. There is also community, which has a minimum of 6 months, the cadence at which new branches get cut.

For volatile packages like firefox and chromium, we carry them in community, which means that desktop users are expected to follow the latest stable release branch.

At present, Firefox has begun to relax their requirements for a supported rust compiler, as the features they need begin to land. I suspect that Firefox will wind up supporting the ferrocene compilers once they are released, but that’s just speculation.

So really, the convergence of an LTS lifecycle for rust and downstream software supporting it is near. We just have to be a little more patient.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 20:23 UTC (Fri) by josh (subscriber, #17465) [Link]

Would it make sense to have a current version of Rust in community for use by firefox, and a frozen version of Rust in main for use by packages in main?

Conill: The long-term consequences of maintainers’ actions

Posted Sep 17, 2021 21:01 UTC (Fri) by wahern (subscriber, #37304) [Link]

How do distros handle backporting bug fixes, especially for dependencies through build systems like cargo? Does cargo let you patch source files locally? Or is none of this actually possible without, e.g., simply forking all the upstream repositories for a project and aliasing everything to your own forks?

Conill: The long-term consequences of maintainers’ actions

Posted Sep 18, 2021 4:27 UTC (Sat) by josh (subscriber, #17465) [Link]

Distributions need to be self-contained and use local sources, including for dependencies. They shouldn't retrieve anything from the Internet during a build. Cargo has a "directory registry" mechanism that allows saying "if you'd get a crate from crates.io, get it from sources in this directory" instead.

(That said, we need a better mechanism for handling downstream patches, because right now distros just patch version 1.2.4 while pretending it's still 1.2.4. We need a way to create 1.2.4.1 instead.)
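The "directory registry" mechanism mentioned above is what `cargo vendor` sets up: it copies all crates.io dependencies into a local directory and prints a source-replacement fragment for `.cargo/config.toml`. A minimal sketch, assuming the default `vendor/` directory name:

```toml
# Redirect anything Cargo would fetch from crates.io
# to local sources under ./vendor instead.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

With this in place, Cargo resolves dependencies from `vendor/` rather than the network (and passing `--offline` makes any network access a hard error), which is the self-contained property distributions want.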

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 0:26 UTC (Sun) by robert_s (subscriber, #42402) [Link]

It's becoming increasingly unrealistic for distributions to diverge from the large statically-linked dependency sets dictated by golang, rust (and also java come to think of it) projects. On top of that, upstream projects are often hostile to downstream packagers making alterations like these.

But I think we're in for a shock the first time there's a really serious vulnerability in a widely used e.g. golang library. At that point, what do projects using that library do? Quietly bump their version, *maybe* make a note in the changelog, and move on? Or do like they should and release their own CVE? If they *don't*, what is the mechanism through which their users are supposed to get the nudge to upgrade? If they *do*, I'm certain the CVE process wouldn't be able to handle the avalanche effect from a widely used package. Do the users of the "recommended" distribution means (the official docker image, "just grab the binary"...) get the nudge somehow?

Or is the answer the increasingly familiar hand-wave of "just always run the latest version"? Because that might *sound* like an answer but it doesn't connect with the reality of running a system comprised of N thousand separate packages.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 3:39 UTC (Sun) by ilammy (subscriber, #145312) [Link]

> On top of that, upstream projects are often hostile to downstream packagers making alterations like these.

Is being unwilling to support hostile? It's understandable that upstream does not really want to support the entire combinatorial explosion of their package and all possible dependency versions, focusing on the latest ones where possible. Resources are scarce, their time is better spent elsewhere. From upstream viewpoint, if distro packagers need to support older versions of dependencies for whatever reasons they have, that’s their burden, not a responsibility of upstream.

It’s just that with modern languages it’s not unrealistic to have hundreds of transitive dependencies. In ye olden C dayes, you’d have just a handful of direct dependencies and an occasional transitive one here or there, with exceptions being rare. Now popping libraries like candies is easy, so that’s what everyone’s doing, without much regard for the maintenance burden they impose on packagers by doing so. Developers use the latest version; it’s natural for them. Not so much for end users.

The question here is whether the distro packaging process is scalable enough. Arguably it isn’t. You can’t just take an existing maintainer and say, “Look, you took care of ${software-package} and its dependencies. Now there are x10 more dependencies, you take care of them too, will ya?” So with the advent of software using more dependencies, you need to increase the maintenance-team size too, so that each of those new mini-dependencies gets as much attention as a library got before, for the quality and speed of delivery to remain the same.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 13:59 UTC (Sun) by Vipketsh (guest, #134480) [Link]

> Resources are scarce

Why am I the only one who thinks this goes two ways? An app or such not wanting to upgrade dependencies because it is work is reasonable as things stand today. But seemingly a dependency reinventing itself, causing tons and tons of cumulative resource burn for all downstreams, is also considered reasonable, given how no one complains about it.

Unless libraries keep some semblance of a stable public-facing ABI, things are going to be as bleak as the grandparent post makes out. What's funny is that that exact situation has happened in the C world before: the fiasco that resulted when lots of projects were embedding zlib and a vulnerability was found therein. It took years to clean that mess up, and instead of learning from that, people are just setting themselves up for the same situation all over again.

> upstream does not really want to support the entire combinatorial explosion of their package and all possible dependency versions

IMO, that's nothing but myth. The only versions (of dependencies and the app) that need to be supported are the ones that people actually use (with few exceptions, what distributions ship), and there are not a lot of those. I also don't think upstream needs to be as forceful about versions as many seem to be. The Linux kernel folks have the right mentality: promise to fix regressions, but at the same time freely make changes without checking them against some "combinatorial explosion of versions". If it breaks your code, it is up to *you* to report it and work with the upstream devs to get it fixed. Don't report it and/or don't cooperate, and it doesn't get fixed. Projects taking this attitude would go a long way for downstreams, and I don't think it would cause *that* much more work for upstream.

> Developers use the latest version, it’s natural for them

Unfortunately, it goes the other way too and then causes just as much pain for the user. There are many projects that moved or are still moving from Python 2 to 3 at the last minute. Or maybe even past that.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 14:39 UTC (Sun) by ilammy (subscriber, #145312) [Link]

> It took years to clean that mess up and instead of learning from that people are just setting up to the same situation all over again.

Package management infrastructure is different now. Things like Cargo make updating dependencies easier. It’s not a quest of figuring out which of your dependencies exactly embed zlib and how to update it in their bespoke build systems. Another thing is the static-linking-by-default approach, which makes maintaining ABI compatibility less of a concern: just rebuild everything when you need to upgrade something.

That said, API breakage is still a churn and can still deter applications from upgrading dependencies. No good way around that. Except for maybe waiting out for some more until things stabilize and stop breaking that often.

> The only versions (of dependencies and the app) that need to be supported are the ones that people actually use (with few exceptions what distributions ship) and there are not a lot of those.

The thing is, the version that people use may be different, sometimes drastically. From a distro user’s perspective, the latest version is whatever their distro packages. From a developer’s perspective, it might be what *their* distro packages. Or maybe something closer to upstream.

> The linux kernel folks have the right mentality: promise to fix regressions

...in the next version of Linus’ tree, then maybe cherry-picked into Linux stable. Backporting the fix to 3.10-whatever version that RHEL ships is RHEL’s responsibility, not of the Linux kernel community. Same as resolving the issues from backports gone wrong that upstream kernel never had in the first place.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 15:31 UTC (Sun) by pizza (subscriber, #46) [Link]

> Package management infrastructure is different now. Thinks like Cargo make updating dependencies easier. It’s not a quest of figuring out which of your dependencies exactly embed zlib and how to update it for their bespoke build system. Another thing is this static linking by default approach which makes maintaining ABI compatibility less of a concern: just rebuild everything when you need to upgrade something.

A key difference is that Cargo only helps those already capable of recompiling the software. If you don't have the complete source code to _everything_ then you can't fix it yourself, period. You can't rely on a third party (eg distributions or some other system integrator) to update that one component and generate a new binary.

Rust (plus Go and all other static-link-only paradigms) makes you completely at the mercy of the software developer / vendor for all updates and fixes. That's what made the zlib thing so bad, and Cargo/etc won't change this -- even the relatively trivial "change your crate list to pull in a fixed version and recompile" actions represent more effort than has been historically demonstrated.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 15:47 UTC (Sun) by ilammy (subscriber, #145312) [Link]

> A key difference is that Cargo only helps those already capable of recompiling the software. If you don't have the complete source code to _everything_ then you can't fix it yourself, period.

But that’s already the case for FOSS distributions. They have all the source code available. They have the automatic build infrastructure. (Or well, they should have it.) Rebuilding the world surely wastes time, electricity, bandwidth, storage space – more than an ideally maintained dynamic-linking ecosystem would – but it’s not impossible. The popularity of the static-link-only paradigm shows that people would rather accept some waste than maintain ABI compatibility.

> You can't rely on a third party (eg distributions or some other system integrator) to update that one component and generate a new binary.

But people rely on their distributions and vendors all the time to keep stuff updated.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 17:08 UTC (Sun) by pizza (subscriber, #46) [Link]

> But that’s already the case for FOSS distributions.

Much as I'd like it to be otherwise, FOSS distributions only ship a tiny portion of the software being written, and are not where most folks get their software. Indeed, this is gleefully called an advantage by the static-linked-world proponents ("cut out the middleman!"), even though in practical terms this makes F/OSS more like proprietary stuff.

But even putting aside proprietary software (which tends to fall more on the Apache OpenOffice side of the competency/responsiveness curve than the LibreOffice side), the overwhelming trend is to push the F/OSS world into an app-store model, with an all-in-one opaque blob per application that might as well be statically linked for all the technical ability an end-user has to modify things. The store itself has no ability to update anything; at best all it can do is scan for a known issue and prevent future downloads if a problem is found. Actually fixing things falls back to whoever "owns" that software's entry on the store. The software might as well be proprietary at that point.

We've been down this route before, more than once, and it has not gone well.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 22:09 UTC (Sun) by JanC_ (subscriber, #34940) [Link]

Automatic rebuilds would also trigger automatic updates everywhere much more frequently than they do now, and I can tell you that would upset pretty much anyone.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 15:48 UTC (Sun) by Vipketsh (guest, #134480) [Link]

> make updating dependencies easier

Granted, it's easier to find the dependencies of something, but that's still only part of the problem. If upgrading a dependency were as easy as bumping a dependency version, there would be no reason for projects to religiously want a single version either, but they do, and they say it's for good reasons. The constant API changes bring with them one of the issues that contributed to the zlib fiasco taking so long to solve: many projects embedding the library edited the interface, so finding the code was not enough -- extra work was needed to merge in the fix, the same as if the new upstream's API had changed.

> waiting out for some more until things stabilize

I wish this were more true, but that utopia never seems to come. For some reason that I cannot make out, for GUI libraries (GTK & Qt) 25+ years of continuous development, 30-40 years of prior art, and three or four major API changes were not enough for things to stabilise. In this time frame things didn't change that much either: we still use the same primitives (buttons, binary on/off switches, sliders, menus, etc.) as when GUIs started being a thing.

> it might be what *their* distro packages

What I'm saying is that there are not that many distributions and relevant versions of each. In other words, there is no "combinatorial explosion". Even between distros many are pretty similar in their package version choices (e.g. they often co-ordinate on Linux versions).

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 16:14 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

> I wish this were more true, but that utopia never seems to come. For some reason that I cannot make out, for GUI libraries (GTK & Qt) 25+ years of continuous development, 30-40 years of prior art, and three or four major API changes were not enough for things to stabilise. In this time frame things didn't change that much either: we still use the same primitives (buttons, binary on/off switches, sliders, menus, etc.) as when GUIs started being a thing.

Alas, the world changes out from underneath us over time. The kernel is lucky in that hardware is pretty much set in stone once it ships, so it's just dealing with adding new things (in general). Qt, on the other hand, had to rework things like moving widgets around to where they better belong (the widget APIs don't change that much, so it's mostly going from `QtGui` to `QtWidgets` or something). These things have improved usability because using Qt's PNG support no longer needs to bring in the GUI bits (Qt5 fixed this IIRC). Now with OpenGL being not *the* graphics API, but *an* API, more rework is necessary. I don't know what features, exactly, precipitated Qt6, but the Qt *developers* are still trustworthy (alas, I'm not so sure about their business unit :/ ).

I don't know. I admire how long Qt can put off decisions to break and at least they're well-advertised and not buried in some patch release.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 19, 2021 22:16 UTC (Sun) by JanC_ (subscriber, #34940) [Link]

It's also useful if both the older & newer API are supported for a while, and preferably if the newer API has all the features the older one has, so that people actually get a reasonable amount of time to port things…

Conill: The long-term consequences of maintainers’ actions

Posted Sep 20, 2021 5:27 UTC (Mon) by rodgerd (guest, #58896) [Link]

You say that, and yet even after maintaining Python 2 and 3 in parallel - for more than a decade! - with 6 years notice of end of life for 2, the team got entitled shrieking and abuse when they finally shut down 2.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 20, 2021 8:09 UTC (Mon) by farnz (subscriber, #17727) [Link]

Which was especially egregious, since all the team did was say that they would not release new versions of Python 2, nor would they look to see if Python 3 bugfixes applied to Python 2.

And there are things like Tauthon out there that someone who really wanted to stick to Py2 could use and help support; the complaining was largely because PSF Python would no longer support Python 2.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 23, 2021 1:42 UTC (Thu) by JanC_ (subscriber, #34940) [Link]

I’m obviously not talking about dropping support for a past API after 10 years. But there are some that do that every single-digit number of weeks/months…

Conill: The long-term consequences of maintainers’ actions

Posted Sep 23, 2021 8:48 UTC (Thu) by NYKevin (subscriber, #129325) [Link]

I think the point is that *no* amount of notice will be universally acceptable. There's always somebody who's still using your ancient software (or hardware) that you told them to stop using years ago, because there's always some sector of the economy that doesn't monetarily benefit from upgrading just yet.

For example:

* Significant portions of the US financial system are or were recently running on EBCDIC-based systems, because that's the reason IBM gave for opposing the removal of trigraphs in C++17.
* In 2019, the US Air Force announced that they had figured out how to launch a nuclear missile without the use of floppy disks. 8-inch floppies, to be precise.
* Every single time Python is mentioned in any capacity in an LWN article, there's always a huge flame war in the comments over how terrible the 2-to-3 migration was, regardless of whether this actually has anything whatsoever to do with the contents of the article.

Conill: The long-term consequences of maintainers’ actions

Posted Sep 18, 2021 19:23 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

I'm going to write two comments, this one is about a potential way forward and thus maybe is more constructive.

Here's what cryptography currently says about their Rust requirement:

"The current minimum supported Rust version is 1.41.0."

1.41 was released in January 2020; as I write this, it's September 2021 and the current Rust is, I think, 1.55. So, although "the Rust ecosystem is rapidly evolving", that's not inherited from cryptography. They are clearly willing to live with a much-impoverished older Rust in order not to burden people with constant compiler upgrades. Alpine is choosing not to take advantage of that.

It seems to me that Alpine absolutely *could* pick a Rust version to ship in main, even if it also ships a newer one for community. If it won't do this, it seems a shame to blame cryptography for it, when it is not of their doing. If some exciting new feature in community required some C++20 feature, and thus a very new GCC that isn't in main, would Alpine then blame all the C programs in Alpine for requiring a C compiler, even though they don't need this very new version?

MISRA

Posted Sep 18, 2021 20:29 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

However I also want to react to “They don’t run on C because C is a good language with all the latest features, they run on C because the risks and mitigations for issues in C programs are well-understood and documented as part of MISRA C.”

MISRA is to a considerable extent making up for inadequacies or outright failings in C. Not so long ago somebody wrote on Hacker News that their safety-critical firmware is in Rust and for certification they were asked to provide some similar document to MISRA so, they just took the MISRA C doc and crossed out all the parts which were irrelevant for Rust, which, of course, is most of the MISRA C document. They're not kidding.

Let's take MISRA's rules 16.x - MISRA jealously guards copyright over their document and so I shan't recite any of it, but all of these 16.x rules are about C's switch statement. Now, a very naive programmer might say "Rust doesn't have switch so that's irrelevant" and of course that won't do at all, MISRA isn't worried about the specific word *switch*. But, when we look at Rust's match we see that it covers off almost all of MISRA's concerns, not as optional warnings, but as facets of the core language design.

Some of the 16.x rules are forbidding things no sane C programmer would do, and many would be surprised to discover are legal, such as trying to declare variables in one part of a switch and then use them in a different part even though presumably either declaration or usage won't happen when the statement is executed. Sure enough these things aren't legal Rust. But what's much nicer is that Rust also covers off some MISRA rules that apply to real C code that real programmers write. MISRA is worried about two things C programmers actually do that can hide terrible faults. Falling through, and inexhaustive matching. The C rules cover these by requiring break; religiously for each clause, and by requiring default religiously in every switch statement.

For Rust's match there is no fallthrough, so MISRA needn't forbid it, and all matches are exhaustive: if your match isn't exhaustive it won't compile. In fact Rust goes further: for API stability Rust provides #[non_exhaustive] enumerations, which say, as the programmer of this data structure, "I don't promise I have listed all the options yet", and so if you match on one of these from another crate, you must have a default case. For other enumerations, a default case is redundant if you have in fact exhaustively handled every case, and the compiler warns that it is unreachable. So, you get the benefit of MISRA's exhaustiveness *and* the benefit of the common C compiler warnings for an unnecessary default that often must be disabled for MISRA code.
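A minimal sketch of those two properties, using a made-up `Signal` enum (not from MISRA or the article):

```rust
// `Signal` is a hypothetical enum for illustration.
enum Signal {
    Red,
    Amber,
    Green,
}

// There is no fallthrough between match arms, and the match must be
// exhaustive: deleting any arm below is a compile error, while adding
// a redundant `_ => ...` arm triggers the `unreachable_patterns` lint.
fn delay_secs(s: Signal) -> u32 {
    match s {
        Signal::Red => 30,
        Signal::Amber => 5,
        Signal::Green => 0,
    }
}

fn main() {
    println!("{}", delay_secs(Signal::Amber)); // prints 5
}
```

Marking the enum `#[non_exhaustive]` flips the requirement: downstream crates matching on it must then supply a wildcard arm.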

One last 16.x rule stands out. MISRA really wants you to write the default case last, or first, but certainly not in the middle where you might lose it. It seems at first as though Rust has no similar rule, and we might need a lint here. But no! If you write your default case before the other cases in Rust, those later cases become unreachable, and the compiler warns you this is a bad idea. You can ignore that warning, but MISRA elsewhere tells you to act on all warnings where possible.
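As a sketch of that warning (classify is a hypothetical function, just for illustration):

```rust
fn classify(n: i32) -> &'static str {
    match n {
        0 => "zero",
        1 => "one",
        // Wildcard arm last, as MISRA would want. Moving `_` above the
        // literal arms makes them unreachable, and rustc emits an
        // `unreachable_patterns` warning pointing at the dead arms.
        _ => "other",
    }
}

fn main() {
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(7), "other");
    println!("ok");
}
```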

Now of course many MISRA directives (in particular) can't be trivially enforced by tools; human judgement is required. But even here Rust is often far ahead of C. For example, MISRA wants you to document and test things. Rust's tooling treats doc comments as Markdown documentation: one extra / in the comment above your function and it goes from mere commentary in the source code to HTML documentation that is automatically built for publication. If you write *example code* in that comment, just in the ordinary way you would with Markdown, the test infrastructure automatically runs that code to check it actually works, at the same time it runs any actual unit tests you wrote.
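A small sketch of that doc-comment machinery (double and the doctest_demo crate name are placeholders):

```rust
/// Doubles its argument.
///
/// The fenced block below is a doctest: `cargo test` extracts it, compiles
/// it, and runs it alongside the ordinary unit tests, while `cargo doc`
/// renders this whole comment as HTML.
///
/// ```
/// assert_eq!(doctest_demo::double(21), 42);
/// ```
pub fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    // The doctest above only runs under `cargo test`; this just shows the call.
    assert_eq!(double(21), 42);
    println!("ok");
}
```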

MISRA

Posted Sep 19, 2021 5:14 UTC (Sun) by NYKevin (subscriber, #129325) [Link]

> Some of the 16.x rules are forbidding things no sane C programmer would do, and many would be surprised to discover are legal, such as trying to declare variables in one part of a switch and then use them in a different part even though presumably either declaration or usage won't happen when the statement is executed. Sure enough these things aren't legal Rust. But what's much nicer is that Rust also covers off some MISRA rules that apply to real C code that real programmers write. MISRA is worried about two things C programmers actually do that can hide terrible faults. Falling through, and inexhaustive matching. The C rules cover these by requiring break; religiously for each clause, and by requiring default religiously in every switch statement.

Just riffing off of this point: A switch statement is a computed goto in a funny hat. It happens to be the least-bad way of expressing "At compile time, I have N different possibilities, and I want to select and execute exactly one of them at runtime," but it isn't actually designed to provide that invariant. You can freely interleave case statements and any other program logic you like, leading to abuses like Duff's device (admittedly, an extreme example, which I have no doubt that MISRA forbids six ways to Sunday).

OTOH, a Rust match statement, to my understanding, *is* specifically designed to provide the "exactly one branch gets executed" invariant, and so you don't need (as many) rules telling you how to use it.

MISRA

Posted Sep 19, 2021 18:12 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

Technically match is actually an expression, and so it's logically _obliged_ to follow only one arm, because the expression obviously has only one value. In fact the values in each arm must be type compatible: the Rust compiler will conclude that whichever type is compatible with all the arms is in fact the type of the expression, and if there is no suitable type, that's an error.

https://play.rust-lang.org/?version=stable&mode=debug...

The compiler points out that two of the arms are clearly integers while another is an &str (in this case a string literal).

In many cases the idiomatic Rust way to express what you wanted actually does involve values from each arm, but of course it is possible for your match to be full of procedural code, in which case those values are just the empty tuple -- or for it to have statements which outright leave the match (such as "return" or "break"), and these diverging arms are in fact type compatible with anything.
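A sketch of both points in one place (describe is a made-up function): the non-diverging arms must share one type, while the "return" arm unifies with anything because it never produces a value.

```rust
fn describe(n: i32) -> String {
    // match is an expression: the first and last arms both yield &'static str,
    // so that is the type of the whole expression. The middle arm diverges
    // (its type is `!`), so it is compatible with the others. Changing one
    // arm to yield, say, an integer would be a compile error.
    let label = match n {
        0 => "zero",
        n if n < 0 => return format!("negative ({n})"),
        _ => "positive",
    };
    label.to_string()
}

fn main() {
    assert_eq!(describe(-3), "negative (-3)");
    assert_eq!(describe(5), "positive");
    println!("ok");
}
```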

MISRA

Posted Sep 21, 2021 6:26 UTC (Tue) by flussence (subscriber, #85566) [Link]

> But even here Rust is often far ahead of C. For example MISRA wants you to document and test things, Rust's infrastructure automatically infers Markdown format documentation, one extra / in the comment above your function and it goes from merely commentary in the source code to HTML documentation that's automatically built for publication. If you write *example code* in that comment just in the ordinary way you would with Markdown, the test infrastructure automatically tries to run the code to check it actually works at the same time it runs any actual unit tests you wrote.

Slightly off topic, but after having to live with Raku's take on code documentation I'm kind of jealous of the batteries-included simplicity of Rust here.

Proposal for including rust in "main"

Posted Sep 24, 2021 9:10 UTC (Fri) by FSMaxB (subscriber, #106415) [Link]

It seems like there is now a serious proposal to include Rust in the "main" repository of Alpine after all: https://gitlab.alpinelinux.org/alpine/tsc/-/issues/21


Copyright © 2021, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds