
Linux-for-Rust or Rust-for-Linux


Posted Aug 30, 2024 11:50 UTC (Fri) by jgg (subscriber, #55211)
In reply to: Linux-for-Rust or Rust-for-Linux by pbonzini
Parent article: Rust-for-Linux developer Wedson Almeida Filho drops out

If the ship has already sailed, then we really need someone like Linus & co. to stand up, say those words clearly to everyone, and repeat them a few times.

I think there is a sizeable contingent that does not believe that to be the case, and Ted's remarks, effectively wanting nothing to do with Rust, are not unique or unreasonable given the cloudy situation. I know plenty of people betting on the failure of this experiment.

IMHO the current situation of Rust does not look like success. It is basically unusable except for unmerged toy projects, and it is still not obvious when that will change. Can we rely on the RHEL 10 kernel having full baseline Rust support? If so, then in four years maybe the server/enterprise industry could actually take Rust seriously. Can Rust even support the out-of-tree backporting everyone does? Is there a strong enough preprocessor? What is the situation with Android? Can an Android device vendor write a driver in Rust today?

I've pondered whether I should consider Rust for some of the greenfield projects I've done recently. iommufd in Rust? Generic PT? But at the end of the day that work is being "paid" for by people who intend to backport it to old kernels. I can't write it in Rust and meet that need. I bet a lot of people are in the same boat.



Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 13:29 UTC (Fri) by pbonzini (subscriber, #60935) [Link]

I am not going to be at the maintainers summit, but that would be a very good occasion for that to happen.

With a killer use case, there's no doubt in my mind that RHEL 10 would be able to support Rust code. Right now, as you point out, there is none (the only non-toy one in the air is Binder), but RHEL 10 will grow new functionality until 2028, so I wouldn't exclude it, especially for drm.

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 15:21 UTC (Fri) by willy (subscriber, #9762) [Link] (32 responses)

I don't think we're at the point of "complete success" until the compilers catch up. I don't think, for example, that you can compile Rust for m68k or alpha at this point. I'd be happy to be told I'm mistaken.

Obviously, this is not a failure of the RustForLinux project. They have important work to do which is independent of code generation.

My biggest gripe is inline functions. It seems crazy to me that we have functions which are so performance critical that they must be inlined in C, and yet in Rust, we make function calls to an exported symbol.

And I haven't got very far through my Rust book. I did write a little userspace program (calculating Pascal's triangle) which went well enough, but most of what I was doing there was figuring out how to use the libraries rather than the kinds of things I'll need to do in the kernel.

And, yeah, I think what Ted did there was reprehensible. I'd've said so had I been in the room, but I was off in the MM track at the time.

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 15:47 UTC (Fri) by jgg (subscriber, #55211) [Link] (22 responses)

I was in the room and sort of thought Ted was "capturing the mood" - let's be blunt - there are some loud voices in FS/Block that are very anti-Rust.

As above, I think the Linux project needs to make a clear, unambiguous decision, or at least set a timeline for making one.

Either we are doing Rust and what Ted said is out of line, or we are not, and Ted's position is reasonable - do not burden busy people with a Rust experiment that nobody is going to use.

I thought the point of this in-between state was supposed to be doing experiments to decide whether Rust is feasible. Is there a result yet? Your concern about inlines is news to me; that sounds kind of fatal, honestly. What was the result of the NVMe experiment? I saw some graphs showing worse performance - that's not encouraging. How much of the toolchain situation is sorted out? Last LPC they were saying constant upgrades were required? GCC was working on a front end?

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 17:55 UTC (Fri) by Wol (subscriber, #4433) [Link] (5 responses)

> Your concern about inlines is news to me, that sounds kind of fatal honestly.

I was under the impression that - compiling huge monolithic blobs - Rust was quite capable of spotting and inlining functions like that all by itself. What's the point of an "inline" keyword if you can rely on the compiler to spot it?

(Dunno whether I like the downsides of huge blobs - it makes libraries more complicated - but it's horses for courses, or pick your poison, whichever suits ...)

Cheers,
Wol

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 20:12 UTC (Fri) by pbonzini (subscriber, #60935) [Link] (4 responses)

These are cross-language calls, so the call from Rust to C (say, to spin_lock()) will not be inlined.
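To illustrate the point, here is a minimal userspace sketch (not kernel code): a Rust caller only sees the C function's declaration, so even a trivial C function compiles to a real out-of-line call unless cross-language LTO is in play. libc's strlen stands in for a kernel helper like spin_lock().

```rust
// Hypothetical illustration: a cross-language call from Rust into C.
// rustc never sees the C function body, only this declaration, so it
// cannot inline the callee -- the call goes through the exported symbol.
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

extern "C" {
    // Declaration only; the body lives in the C library's object code.
    fn strlen(s: *const c_char) -> usize;
}

fn c_strlen(s: &CStr) -> usize {
    // An opaque call across the FFI boundary: even if strlen were a
    // one-instruction function, it stays an out-of-line call here.
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CString::new("hello").unwrap();
    println!("{}", c_strlen(s.as_c_str())); // prints 5
}
```

Cross-language LTO changes this picture by letting the linker see both sides as IR, which is the approach Mozilla took for Firefox.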

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 22:55 UTC (Fri) by roc (subscriber, #30627) [Link] (2 responses)

Mozilla has been inlining across the Rust-C++ boundary, using LTO, for five years: https://www.reddit.com/r/cpp/comments/ch7g6n/mozilla_just...

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 5:38 UTC (Sat) by pbonzini (subscriber, #60935) [Link] (1 responses)

Linux, however, is typically compiled with GCC. I remember people working on LTO for Linux a few years ago, but I think it's not particularly common.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 6:16 UTC (Sat) by roc (subscriber, #30627) [Link]

OK, but the important thing is that there's an obvious fix for cross-language inlining if that becomes important.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 18:02 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Even without LTO, a simple practical solution might be to duplicate some of the hottest code (e.g. spin_lock) in Rust. It's not great, but most of such code has been stable for quite a while.
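A rough sketch of that idea, as plain userspace Rust rather than kernel code: a native spinlock whose fast path rustc can inline, with no FFI call in the way. This is purely illustrative - the kernel's real spin_lock() also deals with preemption, lockdep, and architecture quirks, so duplicating it would carry a real maintenance cost.

```rust
// Hedged sketch of duplicating a hot C primitive in Rust so the fast
// path stays inlinable. Userspace-only; not a real kernel spinlock.
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    // Because the body is visible to rustc, this can be inlined at
    // every call site -- the property lost when calling C without LTO.
    #[inline(always)]
    fn lock(&self) {
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    #[inline(always)]
    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

// Demo: several threads take the lock around a shared counter update.
fn locked_increments(threads: usize, iters: usize) -> usize {
    let lock = Arc::new(SpinLock::new());
    let count = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let (lock, count) = (Arc::clone(&lock), Arc::clone(&count));
            thread::spawn(move || {
                for _ in 0..iters {
                    lock.lock();
                    count.fetch_add(1, Ordering::Relaxed);
                    lock.unlock();
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    count.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", locked_increments(4, 10_000));
}
```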

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 22:46 UTC (Fri) by kees (subscriber, #27264) [Link] (15 responses)

We are doing Rust and what Ted said is out of line. :)

Yes, there continue to be technical issues to work out. This is no different from C; it's the nature of technology. We're always moving to new compiler versions, refactoring to get rid of bad APIs, and so on.

And Rust has shown its strengths very well, IMO. The M1 GPU driver is excellent and in constant use by a large set of distro users. The Binder driver is a drop-in replacement for the C version and will be shipped in Android soon.

I think the characterization of these Rust drivers being "toy projects" is pretty wildly inaccurate, if not outright insulting.

It's just a big change that is taking time to get everything resolved. I don't understand the resistance to learning new languages, especially given the large developer community associated with Rust. We kernel devs are always hoping to get more people involved in Linux... why push people away like this, especially given how many problems Rust permanently solves?

But to answer your questions:

Yes, Rust is feasible.

The NVMe driver works fine and the interfaces it needs are going through review, though it's an uphill battle politically: as you said, there are a few very loud voices in FS/Block that seem to be overwhelmed by their workloads.

The NVMe performance graphs looked very encouraging to me! A proof-of-concept driver with no explicit optimizations matches the C performance in all but two cases: in one it is within roughly 2%, and in the other it was _faster_.

What do you consider "the toolchain situation"? It works fine for me. :)

Yes, the upgrade cycle is faster than for GCC and Clang currently, but this is already slowing as the language features needed for Linux are stabilizing, so now there is a minimum version needed, not an exact version.

Yes, GCC has, I think, 2 front-end projects for Rust. I haven't paid too much attention to this myself, though.

GCC and Rust

Posted Aug 30, 2024 22:53 UTC (Fri) by corbet (editor, #1) [Link] (12 responses)

I'm hoping to learn more about the state of gccrs at Cauldron soon. That is one of my biggest concerns with this whole thing... GCC support is needed to reach all of the targets supported by the kernel, but the gccrs project seems to be languishing with little effort going into it. Somehow, I think, we have to find a way to bring some resources to bear on that problem.

GCC and Rust

Posted Aug 31, 2024 18:43 UTC (Sat) by josh (subscriber, #17465) [Link]

> GCC support is needed to reach all of the targets supported by the kernel

rustc_codegen_gcc is making steady progress.

(That's leaving aside the question of how much value those targets provide.)

GCC and Rust

Posted Sep 1, 2024 11:38 UTC (Sun) by ralfj (subscriber, #172874) [Link] (10 responses)

I think the focus should be on rustc_codegen_gcc: reusing the entire Rust frontend and middle-end, and just replacing the codegen backend. That has a much higher chance of delivering a compiler that is able to keep up with Rust's development and access GCC's backends.

gccrs is attempting an entirely independent second Rust implementation -- that's orders of magnitude more work than rustc_codegen_gcc, and I don't think there are significant benefits that would justify the cost. (This is not to say the gccrs devs should stop, if they're having fun doing what they do then by all means continue, but in terms of where to allocate resources and where to watch for medium-term results, I think rustc_codegen_gcc is clearly the better choice.)

rustc_codegen_gcc is unfortunately held back by GCC's reluctance to provide a nice library API for accessing its backends, but it seems using the libgccjit library works reasonably well.

Standardization - two independent implementations are good.

Posted Sep 2, 2024 15:12 UTC (Mon) by jjs (guest, #10315) [Link] (9 responses)

gccrs being a second, independent implementation is good. It ensures that the specification is, in fact, clear. I've seen many projects (HW & SW) where the specifications seemed clear to the writers who did an implementation, but someone else followed the specifications, made something that matched them, and yet it was not interoperable with the original version. This is the nature of language - there are lots of places where the meaning of a word is not singular and universally agreed (check any dictionary).

Law dictionaries exist to help ensure legal language is precise and unambiguous. There's a reason the IETF requires two independent implementations before declaring something an Internet Standard - https://www.ietf.org/participate/runningcode/implementati.... If two implementations don't produce the same product, it's time to go back and fine-tune the specification to clarify the ambiguities that arise. And the only way to check for ambiguities is via an independent implementation.

Standardization - two independent implementations are good.

Posted Sep 2, 2024 18:43 UTC (Mon) by ralfj (subscriber, #172874) [Link] (8 responses)

> gccrs being a second, independent implementation is good. It ensures that the specification is, in fact, clear.

It may do that. Or it may cause endless issues due to differences in behavior between implementations, as is the case in C. One reason why the standard leaves so many things as "undefined behavior" is that implementations happened to implement different behavior, and none of them wanted to change. It's easy for them to agree to make things UB; the consequences are borne by programmers... just look at the entire debacle with realloc-of-size-0 now being UB: https://queue.acm.org/detail.cfm?id=3588242

I don't deny that multiple independent implementations have advantages. But they also have serious disadvantages. And given the resources required to build and maintain them, I am not convinced that it's worth it overall. The fact that language implementations are typically open-source these days has removed one of the biggest arguments in favor of multiple implementations.

Standardization - two independent implementations are good.

Posted Sep 2, 2024 22:09 UTC (Mon) by jjs (guest, #10315) [Link] (2 responses)

Yes, they can define the behavior as UB - which means they've changed the spec. If you have a spec with defined behavior, and two implementations that differ in behavior but both meet the spec, you really have two choices, IMO:
1. Follow what appears to be the C way - declare it UB in the spec. This is also, from what I understand from this article and other things I've read, what the Rust community is trying to avoid.
2. Clarify the spec. Choose which behavior is correct (or a third way), and rewrite the spec to make it clear.

In either case, the spec is changed. I suppose a 3rd way is to ignore the problem, but, IMO that's worse.

"The fact that language implementations are typically open-source these days has removed one of the biggest arguments in favor of multiple implementations."

I'll argue the opposite - the language implementations being open source is one of the biggest arguments in favor of multiple implementations. Look at what went on with Linux and GCC/LLVM as LLVM began to work to compile the kernel. More defined behavior, from what I can tell. And a huge advantage of open source is that everyone can contribute.

Standardization - two independent implementations are good.

Posted Sep 5, 2024 11:40 UTC (Thu) by taladar (subscriber, #68407) [Link]

The C way was to declare it undefined behavior in the spec because the committee of representatives from multiple implementations that already implemented things differently failed to find a consensus whose code should change, not because there is ever any advantage at all in having undefined parts in a spec.

Standardization - two independent implementations are good.

Posted Sep 5, 2024 14:28 UTC (Thu) by ralfj (subscriber, #172874) [Link]

> If you have a spec with defined behavior, and two implementations have different behavior, but meet the spec, you really have two choices, IMO

That's not what happened here. In this case, the C standard was unambiguous since at least C89: "If size is zero and ptr is not a null pointer, the object it points to is freed". Some implementations violated the standard, and somehow it was deemed better to introduce UB into tons of existing code than to fix the buggy implementations.

Such a hypothetical case could of course happen, though. IMO in that case you have a buggy (unintentionally underdefined) standard -- which happens and which needs to be dealt with reasonably well. If you have multiple different implementations of the standard, they are very hard to fix (other than by making the standard so weak that it encompasses all implementations), and that explains some (but not all) of the oddities in C. If you only have a single implementation, it is a lot easier to fix such bugs in the standard/specification by adjusting either the spec (to still have a *defined* behavior! just maybe not the one that we'd ideally have liked to see) or the implementation. These kinds of things happen in Rust fairly regularly. A big part of what makes this possible is that we have the ability to add "future compatibility" lints to Rust so that there's many months or even years of advance notice to all code that might be affected by a compiler change. I worry that with multiple implementations, this kind of language evolution will become even harder than it already is due to the added friction of having to coordinate this across implementations.

Standardization - two independent implementations are good.

Posted Sep 2, 2024 22:40 UTC (Mon) by viro (subscriber, #7872) [Link] (4 responses)

When specification is "whatever the interpreter actually does", you get wonders like sh(1). Which is _not_ a good language to write in...

Standardization - two independent implementations are good.

Posted Sep 5, 2024 11:41 UTC (Thu) by taladar (subscriber, #68407) [Link] (3 responses)

Which is why there is an effort to develop a Rust spec but that still doesn't require a second implementation, just a test suite that checks if the one implementation conforms to the spec.

2nd Implementation tests the meaning of the specification

Posted Sep 7, 2024 17:10 UTC (Sat) by jjs (guest, #10315) [Link] (2 responses)

That test suite can determine if one implementation meets what the spec writers interpret the spec to mean. It can't detect if the spec always means what the spec writers think it means (the wonders of human language). The purpose of a second implementation is to check that the wording of the spec actually only means what the spec writers think it means. I.e. catch unseen errors in the spec. Again, there's a reason IETF requires two independent implementations of an RFC before they declare it a standard.

2nd Implementation tests the meaning of the specification

Posted Sep 7, 2024 17:40 UTC (Sat) by intelfx (subscriber, #130118) [Link]

Or you can simply have one team write the compiler (and maybe the spec) and some other team write the tests using the spec.

2nd Implementation tests the meaning of the specification

Posted Sep 8, 2024 12:02 UTC (Sun) by farnz (subscriber, #17727) [Link]

That's where two implementations of the test suite comes in handy, since you now have two separate groups of people who've read the specification and agree on what it means; where one test suite fails and the other passes, you need to resolve that by either fixing the specification, or getting the passing test suite to agree that they had a gap in test coverage.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 2:03 UTC (Sat) by sam_c (subscriber, #139836) [Link] (1 responses)

> Yes, the upgrade cycle is faster than for GCC and Clang currently, but this is already slowing as the language features needed for Linux are stabilizing, so now there is a minimum version needed, not an exact version.

I think faster is being generous. Nightly crates are still being used and the minimum Rust version is still extremely recent and being cranked up regularly. I think it's inevitable that Rust will be used in the kernel, but I find it hard to accept it's mature enough to merit being there when so many unstable features are needed.

Rust also doesn't, AFAIK, have any LTS versions for its compiler.

Linux-for-Rust or Rust-for-Linux

Posted Sep 2, 2024 11:55 UTC (Mon) by taladar (subscriber, #68407) [Link]

What benefits do you expect from an LTS version for a compiler where the most recent version is supposed to compile all code previous versions compiled? Sure, there might be the occasional bug compromising that goal but that can (and does) happen with backports to LTS versions too.

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 23:05 UTC (Fri) by roc (subscriber, #30627) [Link] (7 responses)

> I don't think, for example, that you can compile Rust for m68k or alpha at this point.

Museum architectures should use museum kernels. It would be madness to let a few hobbyists veto kernel improvements that would benefit all other users.

But also, Rust does support m68k: https://doc.rust-lang.org/rustc/platform-support/m68k-unk...
And hopefully gccrs will make these complaints go away for good.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 12:01 UTC (Sat) by pizza (subscriber, #46) [Link] (5 responses)

> Museum architectures should use museum kernels. It would be madness to let a few hobbyists veto kernel improvements that would benefit all other users.

Except for the little detail that "museum architectures" (and the long tail of old drivers/filesystems/etc) are part of the mainline kernel.

Where do you draw the popularity line? Currently it's at "someone is actively maintaining it."

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 21:25 UTC (Sat) by roc (subscriber, #30627) [Link] (4 responses)

I'm actually paraphrasing Linus: https://lkml.iu.edu/hypermail/linux/kernel/2210.2/08845.html
> At some point, people have them as museum pieces. They might as well run museum kernels.

If new hardware hasn't been sold for 20 years then I think that's probably a good enough line.

Linux-for-Rust or Rust-for-Linux

Posted Sep 1, 2024 18:01 UTC (Sun) by willy (subscriber, #9762) [Link] (3 responses)

The problem is that you can still @#$&%^= buy them!

https://www.nxp.com/products/processors-and-microcontroll...

I'm disappointed, mostly because I worked on a PowerQUICC board back in 2000, and the fact that they are still selling the 68360 24 years later makes me very sad.

Linux-for-Rust or Rust-for-Linux

Posted Sep 2, 2024 7:25 UTC (Mon) by roc (subscriber, #30627) [Link] (2 responses)

Rust actually supports m68k, so the real problem would be if you could still buy new Alpha chips. I don't think you've been able to do that for a long time.

Linux-for-Rust or Rust-for-Linux

Posted Sep 2, 2024 16:03 UTC (Mon) by Wol (subscriber, #4433) [Link] (1 responses)

As I understood it (I never used them), the m68k chips had a sane design, unlike the x86 ones. Maybe the reason you can still buy them is that people value them for their simplicity and "easy to understand"-ness - that can be worth a lot.

Cheers,
Wol

Linux-for-Rust or Rust-for-Linux

Posted Sep 2, 2024 19:12 UTC (Mon) by pbonzini (subscriber, #60935) [Link]

The current m68k chips (ColdFire) are a reduced and simplified version of the original instruction set. I doubt that a backwards-compatible 680x0 with the addition of SIMD, 64-bit support, CFI, virtualization and whatnot (plus system-wide innovations such as multiprocessing and a fast superscalar microarchitecture) would be any more manageable overall than x86.

Linux-for-Rust or Rust-for-Linux

Posted Sep 1, 2024 11:40 UTC (Sun) by ralfj (subscriber, #172874) [Link]

> And hopefully gccrs will make these complaints go away for good.

Or, (in my view) more likely, rustc_codegen_gcc. :)

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 18:40 UTC (Sat) by josh (subscriber, #17465) [Link]

> I don't think, for example, that you can compile Rust for m68k or alpha at this point. I'd be happy to be told I'm mistaken.

https://doc.rust-lang.org/nightly/rustc/platform-support/...

It's still tier 3, but it exists.

As for alpha, if people still want to keep it alive, rustc_codegen_gcc will handle that eventually.

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 21:09 UTC (Fri) by asahilina (subscriber, #166071) [Link] (14 responses)

> It is basically unusable except for unmerged toy projects and it is still not obvious when that will change.

I guess my Apple AGX GPU driver, which is the kernel side of the world's first and only OpenGL and Vulkan certified conformant driver for Apple Silicon GPUs, and also the FOSS community's first fully reverse engineered driver to achieve OpenGL 4.6 conformance, and which is used by thousands of Asahi Linux users in production, and that literally has never had a reported oops bug in production systems not caused by shared C code (unlike basically every other Linux GPU driver), is "an unmerged toy project".

Since you work for Nvidia, I'm sure you've heard of Nova, the up-and-coming Nouveau replacement driver that is also written in Rust using my Rust DRM abstractions. Is that also going to be "an unmerged toy project"?

This kind of demeaning of our work is why us Rust developers are getting very, very tired of the kernel community.

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 22:41 UTC (Fri) by jgg (subscriber, #55211) [Link] (13 responses)

You should be very proud of what AGX has accomplished, it is amazing software, and an incredible piece of work. In fact everyone I've talked to about it has shared that view.

However, that doesn't change today's facts: AGX is currently unmerged and serves a tiny, niche user base with no commercial relevance. That is an unmerged toy by my definition.

There is nothing wrong at all with working on toy software. Linux itself started out as a toy, this is not an attempt to be demeaning.

The point, as pbonzini elaborated on, is a lack of "killer use case" to motivate RH to seriously turn on kernel Rust in RHEL10. AGX will not alter RH's plans.

Nova is barely started, let's wait a few years to see what impact it has. I'm optimistic that a completed Nova would convince several distros to turn on kernel Rust support. I was actually thinking primarily about the Rust NVMe driver.

Unmerged toy

Posted Aug 30, 2024 22:48 UTC (Fri) by corbet (editor, #1) [Link] (3 responses)

Honestly, "unmerged toy" seems like an unnecessarily dismissive term for something like this. How about "out-of-tree useful driver" - a term that we could apply to things like fwctl as well, perhaps :)

The story of why it is unmerged is something I've never quite managed to dig into, but would like to.

Unmerged toy

Posted Aug 31, 2024 20:32 UTC (Sat) by jgg (subscriber, #55211) [Link] (2 responses)

Thanks for the feedback, Jon. It is admittedly hard to know where colourful language stops being entertaining and starts offending people. For instance, down below someone was calling m68k etc. a "museum architecture", which is a phrase I've seen many times before. It seems like a popular and accurate term for the situation, yet I would also think it is dismissive of the consistent work Geert and others put in.

Unmerged toy

Posted Sep 2, 2024 7:34 UTC (Mon) by roc (subscriber, #30627) [Link] (1 responses)

Feel free to popularize a different term to describe such architectures. We do need a term for the situation where the effort of supporting an architecture (across the entire project, so including the costs of vetoing changes that would benefit other architectures) exceeds the practical benefits of being able to run the latest kernels on machines of that architecture.

Unmerged toy

Posted Sep 6, 2024 7:36 UTC (Fri) by da4089 (subscriber, #1195) [Link]

> Feel free to popularize a different term to describe such architectures.

"Heritage" is the usual respectful euphemism, I think?

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 22:49 UTC (Fri) by Ashton (guest, #158330) [Link]

Boy, the existing contributor base of Linux is not covering itself in glory today.

“Toy project”? Can you please try and not be so petty and rude?

Linux-for-Rust or Rust-for-Linux

Posted Aug 30, 2024 23:56 UTC (Fri) by airlied (subscriber, #9104) [Link] (2 responses)

Throwing out "toy" without defining what you mean (and even with a definition, it's still not a great word choice) is not exactly a great look here.

Like do we consider the open-gpu-kernel driver from NVIDIA a toy because it isn't upstream?

Asahi is not a driver on the enterprise radar, and it won't make RHEL sit up and take notice, but does that make it a toy?

I count Asahi as a very successful fact-finding mission that is, in the end, very hard to upstream in a reasonable manner. That's why Nova is approaching this from the other end: building a driver upstream, where we fix the interactions with other subsystems in order as we go, building the ecosystem upstream rather than having it done in a private fork.

The main current focus is the driver model and Greg at the moment; next after that will probably be getting the PCI, platform, and DRM device bindings into shape, KMS modesetting (which Asahi didn't have to tackle), and then the actual Nova project.

I've already written a Rust implementation that talks to NVIDIA's GSP firmware and encapsulates the unstable ABI in a similar form to the Asahi work, and the advantages of this over a C project doing the same are immense - like a night-and-day difference in how much code had to be written.

I think Linus said at last year's maintainers summit that he was supportive of this and thinks it will happen. If people start acting as active roadblocks to the work, rather than as sideline commentators whom we can ignore, then I will ask Linus to step in and remove the roadblocks, but so far we haven't faced actual problems that education and patience can't solve.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 9:21 UTC (Sat) by Wol (subscriber, #4433) [Link] (1 responses)

> The main current focus is driver model and Greg at the moment, next after that will probbaly be getting pci, platform and drm device bindings into shape, KMS modesetting (which Asahi didn't have to tackle), and th

Given that at least one big developer (Christoph Hellwig - you make it sound like two, with Greg KH) is throwing a lot of effort into cleaning up the kernel, it sounds like they need to be brought on-side if they aren't already.

I'm picking up a lot about Rust-kernel bindings, and how the end result is much cleaner for both the C and Rust sides. So even if the resulting Rust work isn't merged, actually the effort spent creating the Rust interface would be a great help to both of them.

So you now have clean Rust interfaces for anybody who is interested ... and any "I'm not learning Rust" developers who tamper with those interfaces will get shouted at "don't you dare create any (C) bugs that this would have automatically caught!"

Cheers,
Wol

Two implementations - another benefit

Posted Sep 6, 2024 11:46 UTC (Fri) by jjs (guest, #10315) [Link]

Seen it in other projects. You get something that works. Someone else builds a clean implementation - and you start seeing the cleanup in the design/specs as you make the two compatible. Which, in the long term, helps both teams.

Not familiar with Rust, but it sounds like the efforts are having good benefits even without Rust being fully integrated.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 3:30 UTC (Sat) by asahilina (subscriber, #166071) [Link] (4 responses)

> The point, as pbonzini elaborated on, is a lack of "killer use case" to motivate RH to seriously turn on kernel Rust in RHEL10. AGX will not alter RH's plans.

You are aware that the AGX driver is in fact the reason why Fedora is turning on Rust support in upstream kernels, right? I'm pretty sure that is doing more to push RHEL to eventually do the same than anything else, today.

https://gitlab.com/cki-project/kernel-ark/-/merge_request...

Neal is part of the Fedora Asahi SIG.

(Won't comment on your insistence on the "toy" designation since other replies have already done so.)

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 7:38 UTC (Sat) by airlied (subscriber, #9104) [Link] (3 responses)

If I had to guess, Red Hat will not turn on Rust prior to Nova; I don't see any other motivator.

Don't confuse Fedora, CentOS, or ARK with Red Hat here. RHEL is the boss level, but I don't really care about that; I only care about getting things lined up upstream.

I think a lot of the complaints about Rust will evolve away once there is an interesting in-tree consumer: toolchain versions will stabilise, there will be better toolchain support for the things upstream needs, and so on.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 11:11 UTC (Sat) by Conan_Kudo (subscriber, #103240) [Link] (2 responses)

No. Rust is getting turned on because drm_panic is written in Rust. I started working on this enablement because of AGX, but we need it turned on ASAP because drm_panic is approved for Fedora Linux 42.

Nova is on literally nobody's radar right now because it doesn't exist beyond scaffolding. There is no code that does anything yet, to the best of my knowledge, and there will not be any for a long while.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 12:10 UTC (Sat) by airlied (subscriber, #9104) [Link] (1 responses)

DRM panic in Fedora is not a Red Hat commitment to Rust in RHEL. I think I directly said not to confuse Fedora and Red Hat here.

Linux-for-Rust or Rust-for-Linux

Posted Aug 31, 2024 12:31 UTC (Sat) by Conan_Kudo (subscriber, #103240) [Link]

Based on the conversations I've had with the RHEL kernel team so far, the main blocker for RHEL is the lack of modversions support for Rust, which is being worked on. I do think it'll get enabled before Nova is in a useful state, because there are other little drivers in-tree where there are C and Rust versions and the Rust versions are better than the C versions.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds