The safety of unsafe code can depend on the correctness of safe code
Posted Feb 13, 2025 2:28 UTC (Thu) by intelfx (subscriber, #130118)
In reply to: The safety of unsafe code can depend on the correctness of safe code by milesrout
Parent article: Maintainer opinions on Rust-for-Linux
That’s called “unsoundness” and it’s very much avoided.
Posted Feb 13, 2025 6:20 UTC (Thu) by mb (subscriber, #50428) [Link]
Unsafe code can depend on the behavior of safe code, but there are conventional boundaries to that. For example, that trust should not cross crates. But it is reasonable to assume that the std library does the right thing instead.
But there are also exceptions to that. Let me give you an example: it may be reasonable for some unsafe code in a crate to assume that the std implementation of Vec doesn't randomize the indexes internally. If the unsafe code assumes that [5] will give access to element number 5 of the Vec, then it is depending on the safe API of Vec to do the right thing. It would be possible to implement a memory-safe Vec that randomizes index accesses and never goes out of bounds.
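A tiny sketch of the kind of dependency I mean (made-up function, deliberately pointless use of get_unchecked): the unchecked access is justified by what the safe slice/Vec API promises.

```rust
fn sum_first_five(v: &[u64]) -> u64 {
    assert!(v.len() >= 5);
    let mut sum = 0;
    for i in 0..5 {
        // SAFETY: the assert above guarantees i < v.len(). This argument
        // also trusts that the safe len() of std's Vec/slice reports the
        // real number of accessible elements -- which it does.
        sum += unsafe { *v.get_unchecked(i) };
    }
    sum
}
```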
What would not be OK, for example: assuming that random joe's crate implements PartialOrd correctly for some type. It is commonly understood to be unsound for unsafe code to depend on such a foreign PartialOrd impl being correct.
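A sketch of that unsound pattern (again a made-up function): the unchecked access is only in bounds if T's Ord impl -- arbitrary safe code, possibly from another crate -- is a correct total order, which safe code is allowed to get wrong.

```rust
// Caller promises, in safe code, that `sorted` is sorted and contains
// `needle`; we compute the needle's rank with T's comparison.
fn find_present_unchecked<'a, T: Ord>(sorted: &'a [T], needle: &T) -> &'a T {
    let rank = sorted.iter().filter(|&x| x < needle).count();
    // BAD SAFETY ARGUMENT: "rank < sorted.len() because needle is in the
    // slice". A broken-but-safe Ord that claims every element is less
    // than needle makes rank == sorted.len(), and this access is UB.
    unsafe { sorted.get_unchecked(rank) }
}
```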
Another, more common example is that your unsafe code may depend on other crate-internal safe code. For example, safe initialization code is often required to run before unsafe code can access the memory as initialized.
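Something like this hypothetical type, for instance: the SAFETY comment in get() is only valid because the safe fill() in the same crate really does what it says.

```rust
use core::mem::MaybeUninit;

pub struct Buffer {
    data: MaybeUninit<[u8; 64]>,
    ready: bool,
}

impl Buffer {
    pub fn new() -> Self {
        Buffer { data: MaybeUninit::uninit(), ready: false }
    }

    // Safe initialization code that the unsafe block below depends on.
    pub fn fill(&mut self, byte: u8) {
        self.data = MaybeUninit::new([byte; 64]);
        self.ready = true;
    }

    pub fn get(&self, i: usize) -> Option<u8> {
        if !self.ready || i >= 64 {
            return None;
        }
        // SAFETY: `ready` is only set to true after `data` has been fully
        // initialized. That argument depends on every piece of safe code
        // in this crate that can touch `ready` and `data`.
        Some(unsafe { self.data.assume_init_ref()[i] })
    }
}
```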
However, the starting point for the safety analysis is *always* the unsafe block.
Posted Feb 13, 2025 10:48 UTC (Thu) by excors (subscriber, #95769) [Link]
In this case I think you could choose to put the bounds check inside the `unsafe` block so the `unsafe` block has no preconditions, in which case a safety analysis would only have to consider the block (and anything called from within the block) and not the whole function. But in general you probably want to minimise the scope of `unsafe` blocks, so the compiler won't let you accidentally call other `unsafe` functions where you didn't intend to - the same motivation as https://doc.rust-lang.org/edition-guide/rust-2024/unsafe-... - which will result in `unsafe` blocks with preconditions that non-`unsafe` code must uphold.
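Roughly like this (made-up functions): the first version keeps the whole safety argument inside the block, the second leaves a precondition that the surrounding non-`unsafe` code has to keep upholding.

```rust
fn get_checked_inside(v: &[u8], i: usize) -> Option<u8> {
    unsafe {
        // The whole safety argument is local to this block.
        if i < v.len() {
            Some(*v.get_unchecked(i))
        } else {
            None
        }
    }
}

fn get_checked_outside(v: &[u8], i: usize) -> Option<u8> {
    if i < v.len() {
        // SAFETY: precondition established by the non-`unsafe` check
        // above; a later refactor of that safe code could break this.
        Some(unsafe { *v.get_unchecked(i) })
    } else {
        None
    }
}
```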
That Rustonomicon page also gives an example where an `unsafe` block depends on the entire module upholding its preconditions. If you can't avoid module-level preconditions, I think it's important to minimise the size of your module to keep the safety analysis manageable. Reviewers will need to be careful about any patches to non-`unsafe` code in that module, and in any modules that it calls, but at least you can still be confident that patches to any other modules can't break it.
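A simplified sketch in the spirit of that example (hypothetical module): the invariant on `len` has to be upheld by every function in the module, not just by the one containing the `unsafe` block.

```rust
mod tiny_vec {
    pub struct TinyVec {
        buf: Vec<u8>, // backing storage, always fully allocated
        len: usize,   // invariant: len <= buf.len()
    }

    impl TinyVec {
        pub fn new(capacity: usize) -> Self {
            TinyVec { buf: vec![0; capacity], len: 0 }
        }

        pub fn push(&mut self, byte: u8) {
            assert!(self.len < self.buf.len(), "TinyVec is full");
            self.buf[self.len] = byte;
            self.len += 1;
        }

        pub fn last(&self) -> Option<u8> {
            if self.len == 0 {
                return None;
            }
            // SAFETY: relies on the module-wide invariant len <= buf.len().
            Some(unsafe { *self.buf.get_unchecked(self.len - 1) })
        }

        // A later, purely safe patch to this module -- say, adding
        // `pub fn set_len(&mut self, n: usize) { self.len = n; }` --
        // would silently turn last() into UB, even though the diff
        // contains no `unsafe` at all.
    }
}
```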