A prediction with no data to support it
Posted Feb 28, 2025 17:23 UTC (Fri) by magnus (subscriber, #34778)
In reply to: A prediction with no data to support it by NYKevin
Parent article: A change in maintenance for the kernel's DMA-mapping layer
Posted Mar 3, 2025 11:23 UTC (Mon)
by laarmen (subscriber, #63948)
[Link]
I'm pretty sure such situations will come up, but I don't see this as a problem. Again, the goal is not to avoid `unsafe` entirely. The entire point of `unsafe` is to have a way to communicate that there are invariants the compiler cannot check, but that guarantee the safety of operations that are not safe in general.
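A minimal sketch of that communication channel (the type and names are hypothetical, not from any real kernel binding): the invariant lives in a `# Safety` comment on an `unsafe fn`, and the caller acknowledges it by writing `unsafe`.

```rust
/// Hypothetical MMIO register accessor, for illustration only.
struct MmioReg {
    addr: *mut u32,
}

impl MmioReg {
    /// # Safety
    ///
    /// `addr` must point to a mapped register that is valid for
    /// volatile reads and writes for the lifetime of the returned
    /// value. The compiler cannot verify this; the caller
    /// communicates that it holds by writing `unsafe`.
    unsafe fn new(addr: *mut u32) -> Self {
        MmioReg { addr }
    }

    fn write(&mut self, val: u32) {
        // Sound only because the caller of `new` upheld the
        // invariant documented above.
        unsafe { core::ptr::write_volatile(self.addr, val) }
    }
}
```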
Posted Mar 3, 2025 12:02 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (2 responses)
It does tend to be true in practice that you can encapsulate your unsafe blocks behind zero-overhead safe abstractions (e.g. a lock type that is unsafe to construct because it depends on all CPUs running kernel code, not userspace), but that's an observation about code, not a requirement for benefiting: even without that encapsulation, you benefit by reducing the scope of unsafe blocks, making the remaining bits easier to verify.
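The encapsulation pattern can be sketched in a few lines (this is a generic illustration, not kernel code): a type establishes an invariant in its safe constructor, and the one unsafe block inside relies only on that local invariant.

```rust
/// A vector guaranteed non-empty by construction (illustrative).
struct NonEmpty<T> {
    items: Vec<T>,
}

impl<T> NonEmpty<T> {
    /// Safe constructor: the invariant is established once, here.
    fn new(first: T) -> Self {
        NonEmpty { items: vec![first] }
    }

    fn push(&mut self, item: T) {
        self.items.push(item); // cannot break the invariant
    }

    /// Safe to call: the unsafe block depends only on the local
    /// invariant that `items` is never empty, so a reviewer can
    /// verify it without reading the rest of the codebase.
    fn first(&self) -> &T {
        unsafe { self.items.get_unchecked(0) }
    }
}
```

The abstraction is zero-overhead in the sense farnz describes: `first` skips the bounds check, yet every caller stays in Safe Rust.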
Posted Mar 4, 2025 3:50 UTC (Tue)
by magnus (subscriber, #34778)
[Link] (1 responses)
For example, if the ownership or concurrency management of something central, like a struct page, depends on a lot of tangled state that the Rust compiler cannot verify, you can create abstractions that modify it while also requiring that the other conditions are satisfied out of band. If such abstractions are called at the wrong time, the code still compiles but yields unsafe behavior.
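A sketch of that hazard, with entirely hypothetical names: when a precondition (here, a global lock the compiler cannot see) can't be encoded in the types, the honest move is to mark the operation `unsafe` and document it, so the obligation lands at the call site rather than silently compiling away.

```rust
/// Hypothetical page-like object whose mutation is only sound while
/// an external lock, invisible to the compiler, is held.
struct Page {
    refcount: u32,
}

impl Page {
    /// # Safety
    ///
    /// The caller must hold the (hypothetical) global page lock.
    /// Calling this at the wrong time compiles fine but races with
    /// other users — exactly the tangled-state hazard described
    /// above — so the `unsafe` marker pushes the check to the caller.
    unsafe fn bump_refcount(&mut self) {
        self.refcount += 1;
    }
}
```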
On the other hand, the hard distinction between safety bugs and logic bugs may not make much sense deep in the kernel anyway, since any logic bug there can be both a safety and a functional issue.
Posted Mar 4, 2025 10:23 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
And even in the kernel, the hard distinction makes a lot of sense; the point of unsafe is that the "unsafe superpowers" allow you to cause "spooky action at a distance" by breaking the rules of the abstract machine. The core purpose of the unsafe/safe distinction is to separate out these two classes of code, so that you can focus review efforts on ensuring that unsafe code doesn't break the rules of the machine, and hence that bugs can be found by local reasoning around the area of code with a bug.
The problem that Unsafe Rust and C both share is that a bug doesn't have to have local symptoms; for example, a bug in an in-kernel cipher used for MACsec can result in corruption of another process's VMA structures, causing the damaged process to crash for no apparent reason. That means that you have to understand the entire codebase to be certain of finding the cause of a bug; Safe Rust at least constrains the impact of a bug in the code to the things that code is supposed to be touching, so you can (e.g.) rule out all parts of the Safe Rust codebase that don't touch VMAs if what you're seeing is a corrupt VMA.
A prediction with no data to support it
The goal isn't to remove unsafe completely when you're writing a kernel; rather, you want to constrain it to small chunks of code that are easily verified by a human reader. For example, it's completely reasonable to require unsafe when changing paging-related registers: you're changing something underneath yourself that the compiler cannot check, and that can completely break all the safety promises Rust has verified.
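Paging registers can't be shown in a runnable snippet, but the same discipline — a tiny unsafe block whose justification a reader can verify locally — looks like this, with `MaybeUninit` standing in for the compiler-invisible operation:

```rust
use core::mem::MaybeUninit;

/// Builds a fixed array without a Default bound. The unsafe block is
/// confined to one line whose precondition (every element has been
/// written) a reader can check by looking only at this function.
fn build_squares() -> [u64; 8] {
    let mut buf: [MaybeUninit<u64>; 8] = [MaybeUninit::uninit(); 8];
    for (i, slot) in buf.iter_mut().enumerate() {
        slot.write((i as u64) * (i as u64));
    }
    // SAFETY: the loop above initialized all 8 elements.
    unsafe { core::mem::transmute::<_, [u64; 8]>(buf) }
}
```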
Use of unsafe in kernel Rust code
You may have to weaken it compared to #![forbid(unsafe_code)], but that's still a lot stronger than what you get from plain C. Bubbling up unsafe to a high level is absolutely fine, though: it just tells callers that there are safety promises Rust can't check, and that it relies on them checking manually instead.
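Bubbling up can be sketched like this (a hypothetical driver API, not from any real crate): the whole entry point is an `unsafe fn`, so the contract that Rust can't check is visible at the outermost boundary instead of being hidden inside.

```rust
/// Hypothetical driver handle, for illustration only.
struct Driver {
    mmio_base: usize,
}

/// # Safety
///
/// `mmio_base` must be the base of this device's register window,
/// mapped and owned exclusively by the caller. Rust cannot check
/// this; the `unsafe fn` signature relies on the caller verifying
/// it manually — the "bubbled up" contract described above.
unsafe fn init_driver(mmio_base: usize) -> Driver {
    Driver { mmio_base }
}
```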
