

A prediction with no data to support it

Posted Feb 28, 2025 17:23 UTC (Fri) by magnus (subscriber, #34778)
In reply to: A prediction with no data to support it by NYKevin
Parent article: A change in maintenance for the kernel's DMA-mapping layer

Thanks for the answer, and sorry for harping on this, but I just wonder whether there are situations in the kernel that cannot be nicely contained in Rust abstractions with unsafe code and safe before/after points. For example, situations where you can safely act on shared data without locking because you know that the overall state of the system is such that you can't have races, but you can't describe that with Rust features. Maybe in scheduler code you might know that all the CPUs got interrupted and you know what code the others are executing (as they are all running the same kernel code).



A prediction with no data to support it

Posted Mar 3, 2025 11:23 UTC (Mon) by laarmen (subscriber, #63948)

> but I just wonder whether there are situations in the kernel that cannot be nicely contained in Rust abstractions with unsafe code and safe before/after points.

I'm pretty sure such situations will come up, but I don't see that as a problem. Again, the goal is not to avoid `unsafe` entirely. The entire point of `unsafe` is to have a way to communicate that there are invariants, which the compiler cannot verify, that guarantee the safety of operations that are not safe in general.
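To make that concrete, here is a minimal userspace sketch (nothing kernel-specific, and the function names are made up): the unsafe function documents the invariant the compiler cannot check, and each call site records why that invariant holds.

```rust
/// Returns the first element of `v`, skipping the bounds check.
///
/// # Safety
/// The caller must ensure that `v` is non-empty; passing an empty
/// slice is undefined behavior.
unsafe fn first_unchecked(v: &[u32]) -> u32 {
    // SAFETY: the caller promised that `v` is non-empty, an invariant
    // the compiler cannot verify here.
    unsafe { *v.get_unchecked(0) }
}

fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: we just checked that `v` is non-empty.
        unsafe { first_unchecked(v) }
    }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}
```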

Use of unsafe in kernel Rust code

Posted Mar 3, 2025 12:02 UTC (Mon) by farnz (subscriber, #17727)

The goal isn't to remove unsafe completely when you're writing a kernel; rather, you want to constrain it to small chunks of code that are easily verified by a human reader. For example, it's completely reasonable to require unsafe when you're changing paging-related registers, since you're changing something underneath yourself that the compiler cannot check, and that can completely break all the safety promises Rust has verified.

It does tend to be true in practice that you can encapsulate your unsafe blocks behind zero-overhead safe abstractions (e.g. a lock type that is unsafe to construct because it depends on all CPUs running kernel code, not userspace), but that's an observation about code, not a requirement for getting the benefit; even without that encapsulation, you benefit by reducing the scope of unsafe blocks, so that it's easier to verify the remaining bits.
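As a rough sketch of that lock example (userspace Rust, with the "all CPUs running kernel code" condition obviously just pretended here): the constructor is unsafe because it encodes a promise the compiler cannot see, while everything after construction is an ordinary safe call built on one small, auditable unsafe block.

```rust
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicBool, Ordering};

/// A toy spinlock whose constructor is unsafe because its correctness
/// depends on a condition the compiler cannot see (hypothetically:
/// "all CPUs are running kernel code while this lock exists").
pub struct KernelOnlyLock<T> {
    locked: AtomicBool,
    data: UnsafeCell<T>,
}

// SAFETY: the spinlock serializes access to `data`.
unsafe impl<T: Send> Sync for KernelOnlyLock<T> {}

impl<T> KernelOnlyLock<T> {
    /// # Safety
    /// The caller must guarantee that the external condition holds for
    /// as long as this lock is used.
    pub const unsafe fn new(value: T) -> Self {
        Self { locked: AtomicBool::new(false), data: UnsafeCell::new(value) }
    }

    /// Safe to call: the promise was made once, at construction time.
    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        while self.locked.swap(true, Ordering::Acquire) {
            core::hint::spin_loop();
        }
        // SAFETY: we hold the lock, so we have exclusive access to `data`.
        let r = f(unsafe { &mut *self.data.get() });
        self.locked.store(false, Ordering::Release);
        r
    }
}
```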

Use of unsafe in kernel Rust code

Posted Mar 4, 2025 3:50 UTC (Tue) by magnus (subscriber, #34778)

The point I am trying to make is that you may not always be able to completely encapsulate the unsafe code in safe abstractions, but may have to propagate it upward in some way: either by going up the call stack and making all users of the abstraction unsafe, or by lying to the compiler that your abstraction is safe and then requiring the additional constraints to be managed by the callers outside of the type system. So you may be forced to weaken the idea of "safe by construction" a little bit.

For example, if the ownership or concurrency management of something central, like a struct page, depends on a lot of tangled state that the Rust compiler cannot verify, you can create abstractions that modify it, but you also require that the other conditions are satisfied. If they are called at the wrong time, they would still compile but yield unsafe behavior.
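Roughly, the two options look like this (an illustrative sketch only; PageRef and the lock it refers to are made up):

```rust
/// Hypothetical handle to a shared `struct page`-like object whose
/// ownership rules live in the surrounding kernel state, not in the
/// type system.
pub struct PageRef {
    ptr: *mut u64, // imagine this points into a global page array
}

impl PageRef {
    /// Option (a): propagate the obligation up the call stack by making
    /// the operation itself unsafe.
    ///
    /// # Safety
    /// The caller must guarantee exclusive access to the page, e.g. by
    /// holding the (hypothetical) lock that protects it.
    pub unsafe fn set_flags_unchecked(&self, flags: u64) {
        // SAFETY: exclusivity is promised by the caller.
        unsafe { *self.ptr = flags };
    }

    /// Option (b): declare the function safe and move the precondition
    /// into documentation. Callers that get the timing wrong still
    /// compile cleanly; the compiler has simply been told not to worry.
    pub fn set_flags(&self, flags: u64) {
        // Invariant (not checked by the compiler): we currently have
        // exclusive access to the page.
        unsafe { *self.ptr = flags };
    }
}
```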

On the other hand, the hard distinction between safety issues and logic bugs may not make much sense deep in the kernel anyway, as any logic bug would be both a safety and a functional issue.

Use of unsafe in kernel Rust code

Posted Mar 4, 2025 10:23 UTC (Tue) by farnz (subscriber, #17727)

You may have to weaken it compared to #![forbid(unsafe_code)], but that's still a lot stronger than what you get from plain C. Bubbling up unsafe to a high level is absolutely fine, though; it just tells callers that there are safety promises Rust can't check, and that they must check them manually instead.
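In practice that manual check usually shows up as a "# Safety" section on the bubbled-up function plus a matching SAFETY comment at each call site. A small sketch, with an invented device and register layout:

```rust
/// Reads a device register at `offset`.
///
/// # Safety
/// `offset` must be within the device's MMIO window and the device must
/// be powered on (neither condition is visible to the compiler).
pub unsafe fn read_register(base: *const u32, offset: usize) -> u32 {
    // SAFETY: the caller has promised that `base + offset` is a valid,
    // mapped register.
    unsafe { base.add(offset).read_volatile() }
}

pub fn read_status(base: *const u32) -> u32 {
    // SAFETY: offset 0 is the status register on every (hypothetical)
    // revision of this device, and the probe path only hands us `base`
    // after powering the device on.
    unsafe { read_register(base, 0) }
}
```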

And even in the kernel, the hard distinction makes a lot of sense; the point of unsafe is that the "unsafe superpowers" allow you to cause "spooky action at a distance" by breaking the rules of the abstract machine. The core purpose of the unsafe/safe distinction is to separate out these two classes of code, so that you can focus review efforts on ensuring that unsafe code doesn't break the rules of the machine, and hence that bugs can be found by local reasoning around the area of code with a bug.

The problem that Unsafe Rust and C both share is that a bug doesn't have to have local symptoms; for example, a bug in an in-kernel cipher used for MACsec can result in corruption of another process's VMA structures, causing the damaged process to crash for no apparent reason. That means that you have to understand the entire codebase to be certain of finding the cause of a bug; Safe Rust at least constrains the impact of a bug in the code to the things that code is supposed to be touching, so you can (e.g.) rule out all parts of the Safe Rust codebase that don't touch VMAs if what you're seeing is a corrupt VMA.

