It can still crash
Posted Feb 10, 2025 19:18 UTC (Mon) by mb (subscriber, #50428)
In reply to: It can still crash by daroc
Parent article: Maintainer opinions on Rust-for-Linux
Certain parts of the program can be guaranteed not to crash, because the range checks have already been done at one earlier point and that assurance is passed downstream via the type system, for example.
There are endless such possibilities.
That has nothing to do with "unsafe", though.
But that needs a somewhat more complex example to show.
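A minimal sketch of the pattern mb describes, using a hypothetical Percent type (not from the comment itself): the range check happens once, at construction, and downstream code that accepts the type cannot crash on out-of-range values.

fn main() {
    // Invariant carried by the type: the wrapped value is always 0..=100.
    struct Percent(u8);

    impl Percent {
        // The only way to obtain a Percent is to pass the range check.
        fn new(v: u8) -> Option<Percent> {
            (v <= 100).then_some(Percent(v))
        }
    }

    // No range check, no panic: the invariant holds by construction,
    // so the subtraction cannot underflow.
    fn remaining(p: &Percent) -> u8 {
        100 - p.0
    }

    if let Some(p) = Percent::new(42) {
        println!("{}", remaining(&p));
    }
}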
Posted Feb 11, 2025 17:18 UTC (Tue)
by acarno (subscriber, #123476)
[Link] (31 responses)
That said - the properties you encode in Ada aren't quite equivalent (to my understanding) to those in Rust. Ada's focus is more on mathematical correctness, while Rust's focus is more on concurrency and memory correctness.
Posted Feb 11, 2025 18:28 UTC (Tue)
by mb (subscriber, #50428)
[Link] (30 responses)
It's quite common that the type system is used for logical correctness, too.
For a simple example see https://doc.rust-lang.org/std/path/struct.Path.html
This type has nothing to do with safety, but it makes working with paths much less error prone than manually poking at strings. And it makes APIs better by clearly requiring a Path type instead of a random string.
If you do operations with such types, the errors can't accidentally be ignored (and then lead to runtime crashes). For example, the compiler won't let you ignore the fact that paths can't always be converted to UTF-8 strings. It forces you to handle that logic error, unless you explicitly say in your code to crash the program if the string is not UTF-8.
There are many much more complex examples in crates outside of the std library, where certain operations cause objects to become objects of other types because the logical/mathematical properties change. This is possible due to move semantics consuming the original object, so that it's impossible to keep using it. And with the new object type it's impossible to do the "old things" of the previous type.
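A minimal sketch of that consume-on-transition pattern, with hypothetical Draft/Published types (mine, not from any particular crate): the operation takes the object by value, so the old state is gone and the new type no longer offers the old operations.

struct Draft { text: String }
struct Published { text: String }

impl Draft {
    fn edit(&mut self, more: &str) {
        self.text.push_str(more);
    }

    // Publishing consumes the Draft; afterwards the original binding is
    // unusable, and Published has no edit() method.
    fn publish(self) -> Published {
        Published { text: self.text }
    }
}

fn main() {
    let mut d = Draft { text: String::from("hello") };
    d.edit(", world");
    let p = d.publish();
    // d.edit("!");   // compile error: `d` was moved by publish()
    println!("{}", p.text);
}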
Posted Feb 11, 2025 21:37 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Posted Feb 12, 2025 0:06 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (28 responses)
1. If some precondition holds, you either already have an instance of that type, or can easily get one.
2. You can't get an instance if the precondition fails to hold.
3. You can't invalidate the precondition while holding an instance (but you may be able to invalidate it and give up the instance in one operation). If you are allowed to obtain multiple instances that pertain to exactly the same precondition, then this rule is upheld with respect to all of those instances.
4. Functions can require you to have an instance of the type in order to call them.
5. The type is zero-size, or is a zero-cost wrapper around some other type (usually whatever object the precondition is all about), so that the compiler can completely remove it once your program has type-checked.
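A minimal sketch of how rules (1) through (5) can look in practice, using a hypothetical NonEmpty proof type (mine, not NYKevin's):

mod proof {
    // Rule (5): a zero-cost wrapper around the borrow the precondition is
    // about. The field is private, so the proof can't be forged: rule (2).
    pub struct NonEmpty<'a>(&'a [u8]);

    impl<'a> NonEmpty<'a> {
        // Rule (1): if the precondition holds, you can easily get an instance.
        pub fn new(s: &'a [u8]) -> Option<Self> {
            if s.is_empty() { None } else { Some(NonEmpty(s)) }
        }

        // Rule (4): only callable with an instance in hand. Rule (3): the
        // borrow keeps the underlying slice alive while the proof exists.
        pub fn first(&self) -> u8 {
            self.0[0]   // cannot fail: emptiness was ruled out in new()
        }
    }
}

fn main() {
    let data = [1u8, 2, 3];
    if let Some(ne) = proof::NonEmpty::new(&data) {
        println!("{}", ne.first());
    }
}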
To avoid repeating myself over and over again: Assume that all of these rules are qualified with "..., unless you use some kind of escape hatch, like a type cast, reflection, unsafe, etc."
(1) and (4) are doable in nearly every reasonable programming language. (2) is possible in most languages that have visibility specifiers like pub or private. (5) is impossible in most "managed" languages because they (usually) require objects to have some minimal runtime gunk for purposes such as garbage collection and dynamic dispatch, but systems languages should be able to do at least the zero-cost wrapper without any difficulty, and managed languages may have smarter optimizers than I'm giving them credit for.
(3) is the real sticking point for most languages. It is pretty hard to do (3) without some kind of substructural typing, and that is not something that appears in a lot of mainstream programming languages besides Rust.
Just to briefly explain what this means:
* A "normal" type may be used any number of times (in any given scope where a variable of that type exists). Most types, in most programming languages, are normal. Substructural typing refers to the situation where at least one type is not normal.
It can be argued that Rust's types are affine at the level of syntax, but ultimately desugar into linear types because of the drop glue (i.e. the code emitted to automatically drop any object that goes out of scope, as well as all of its constituent parts recursively). If there were an option to "opt out" of generating drop glue for a given type (and fail the compilation if the type is used in a way that would normally generate drop glue), then Rust would have true linear types, but there are a bunch of small details that need to be worked out before this can be done.
Posted Feb 12, 2025 20:02 UTC (Wed)
by khim (subscriber, #9252)
[Link] (10 responses)
> there are a bunch of small details that need to be worked out before this can be done

Ughm… "before it would be usable", you wanted to say? Isn't that trivial? Like this:

use std::mem::{transmute_copy, ManuallyDrop};

#[repr(transparent)]
struct Linear<T>(T);

impl<T> Drop for Linear<T> {
    fn drop(&mut self) {
        const { assert!(false) };
    }
}

impl<T> Linear<T> {
    fn unbox(self) -> T {
        // SAFETY: type Linear is #[repr(transparent)]
        let t: T = unsafe { transmute_copy(&self) };
        _ = ManuallyDrop::new(self);
        t
    }
}

The big question is: what to do about panic! – not everyone likes to use -C panic=abort. And without -C panic=abort the compiler would complain about any code that may potentially even touch panic!.
Posted Feb 12, 2025 20:14 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (9 responses)
Posted Feb 12, 2025 20:30 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link]
Posted Feb 12, 2025 20:30 UTC (Wed)
by khim (subscriber, #9252)
[Link] (7 responses)
Isn't that how linear types would work in any other language, too? Precisely because of the halting problem, such systems have to reject certain programs that are, in fact, valid. That's what Rust does with references, too. And as usual, you may solve the issue with ManuallyDrop and maybe some unsafe.

If you attempt to use this code, you'll find out that the biggest issue is not this purely theoretical problem, but a much more practical one: lots of crates assume that types are affine, and so many code fragments that “should be fine” in reality generate drops on failure paths. Not in the sense of “the compiler misunderstood something and decided to materialize a drop that's not needed there”, but in the sense of “the only reason drop shouldn't run at runtime here is because of properties the compiler doesn't even know about and couldn't verify”. An unwinding panic! is the most common offender, but there are many others.

IOW: the problem with linear types in Rust is not finding better syntax for them, but deciding what to do with the millions of lines of code that are already written… and that are not really compatible with linear types.
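A sketch (mine, not khim's) of the failure mode described above, reusing the Linear<T> type from the earlier comment: any call that may unwind forces the compiler to materialize drop glue for the unwind path, which is exactly where the const assertion fires.

// With an unwinding panic strategy, println! may panic, so the compiler must
// plan to drop `v` during unwinding - instantiating Linear::<Vec<u8>>::drop
// and its `const { assert!(false) }`, even though that drop can never run on
// the success path.
fn process(v: Linear<Vec<u8>>) -> Vec<u8> {
    println!("processing…");   // potential unwind => drop glue for `v`
    v.unbox()
}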
Posted Feb 12, 2025 22:02 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (6 responses)
Posted Feb 12, 2025 22:18 UTC (Wed)
by khim (subscriber, #9252)
[Link] (5 responses)
But that's precisely what I'm talking about. If you would take this exact replacement and put it into your crate without telling the compiler that it needs to do some special dance, then everything works. Linear types are easy to achieve in Rust… but then you need to rewrite more-or-less all unsafe code and the standard library – which would, essentially, turn it into a different language.

I don't see how changes to the language may fix that. If you go with linking tricks then you can cheat a bit, but “real” linear types, if they were added to the language, would face the exact same issue that we see here: the compiler would need to prove that it doesn't need drop in exactly the same places, and would face the exact same issues with that.

IOW: changing the compiler to make “real” linear types work and changing the compiler to make static_assert-based linear types work are not two different kinds of work but, in fact, exactly the same work.

If someone is interested in bringing them to Rust, then taking the implementation that already exists and looking at the changes needed to support it would be much better than discussions about proper syntax.
Posted Feb 12, 2025 22:42 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (4 responses)
> IOW: changing compiler to make “real” linear types and changing the compiler to make static_assert-based linear types work are not two different kinds of work, but, in fact, exactly the same work.
Currently, the reference[1] says that this is allowed to fail at compile time:
if false {
    // The panic may or may not occur when the program is built.
    const { panic!(); }
}
I agree that, if a given type T is manifestly declared as linear, then the compiler does have to prove that <T as Drop>::drop is never invoked. But I tend to assume that no such guarantee will be provided for arbitrary T, because Rust compile times are already too slow as it is, so you will have to use whatever bespoke syntax they provide for doing that, or else it will continue to be just as brittle as it is now.
Speaking of bespoke syntax, the most "obvious" spelling would be impl !Drop for T {} (like Send and Sync). But if you impl !Drop, then type coherence means you can't impl Drop, and therefore can't put a const assert in its implementation. Maybe they will use a different spelling, to allow for (e.g.) drop-on-unwinding-panic to happen (as a pragmatic loophole to avoid breaking too much code), but then the const assert probably won't work either (because the compiler will not attempt to prove that the panic drop glue is never invoked).
[1]: https://doc.rust-lang.org/reference/expressions/block-exp...
Posted Feb 12, 2025 23:02 UTC (Wed)
by khim (subscriber, #9252)
[Link] (3 responses)
> Currently, the reference[1] says that this is allowed to fail at compile time:

C++17 solved that with if constexpr, which does have such guarantees. Why is it not a problem for C++, then?

> It does not promise that an unreachable const block is never evaluated (in the general case, that would require solving the halting problem).

No. There is no need for that. Unlike trait resolution, const evaluation happens after monomorphisation, not before. It's exactly like C++ templates and should work in a similar fashion and require similar resources. Templates in C++ were handled decently by EDG 30 years ago, and computers were much slower back then.

And yes, C++ has a rule that if something couldn't be instantiated, then it's not a compile-time error as long as that instantiation is not needed. That includes destructors. It has been there since day one, that is, since C++98; and while MSVC was notoriously bad at following that rule, clang was always pretty precise. If some instantiations of the static_assert in a linear type's destructor happen when they shouldn't happen, it's a matter of tightening the specification and the implementation, not a question of doing lots of math and theorem-proving.

To a large degree it's a chicken-and-egg issue: C++ has, essentially, built all its advanced techniques around SFINAE, and thus compilers learned to handle it well; in Rust very few developers even know or care that an analogue exists in the language, thus it's not handled correctly in many cases. But no, it's not a matter of complicated math or something that should slow down compilation; on the contrary, it's something that's relatively easy to implement: C++ is an existence proof.

Yes, but, ironically enough, that would require a lot of work, because if you do that then you are elevating the whole thing to the level of types: errors are detected pre-monomorphisation, and now it's no longer an issue of implementing things carefully but becomes a type-system problem.
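A small illustration (mine, not from the thread) of "const evaluation happens after monomorphisation": the const block is only evaluated for the instantiations that are actually used, much like a static_assert inside a C++ template.

fn assert_small<T>() {
    // Evaluated once per concrete T, at monomorphisation time.
    const { assert!(std::mem::size_of::<T>() <= 8, "type too large") };
}

fn main() {
    assert_small::<u32>();          // fine: checked for T = u32 only
    // assert_small::<[u8; 64]>();  // uncommenting makes the build fail
}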
Posted Feb 13, 2025 0:14 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
It has to be in the type system in some form, or else generic code cannot coherently interact with it (unless we want to throw away literally all code that drops anything and start over with a system where drop glue is not a thing - but that's so obviously a non-starter that I cannot imagine you could be seriously proposing it). We do not want to recreate the C++ situation where all the types match, but then something blows up in monomorphization.
To be more concrete: Currently, in Rust, you can (mostly) figure out whether a given generic specialization is valid by reading its trait bounds. While it is possible to write a const block that imposes additional constraints, this is usually used for things like FFI and other cases where the type system doesn't "know" enough to stop us from doing something dangerous. Outside of those special cases (which typically make heavy use of unsafe), the general expectation is that if a type matches a given set of trait bounds, then I can use it as such.
This is useful for pedagogical reasons, but it is also just a heck of a lot more convenient. When I'm writing concrete code, I don't have to grep for static_assert or read compiler errors to figure out what methods I'm allowed to call. I can just read the trait bounds in the rustdoc and match them against my concrete types. When I'm writing generic code, I don't have to instantiate a bunch of specializations to try and shake out compiler errors one type at a time. The compiler will check the bounds for me and error out if I write something incoherent, even before I write any tests for my code.
But trait bounds are more than just a convenient linting service. They are a semver promise. If I have a function foo<T: Send>(t: T), I am not allowed to change it to foo<T: Send + Sync>(t: T) without breaking backwards compatibility. If you wrote foo::<RefCell>(cell) somewhere in your code, I have promised that that will continue to work in future releases, even if I never specifically thought about RefCell. Droppability completely breaks this premise. If droppability is only determined at monomorphization, then I can write a function bar<T>(t: T) -> Wrapper<T> (for some kind of Wrapper type) that does not drop its argument, and then later release a new version that has one unusual code path where the argument does get dropped (by one of its transitive dependencies, just to make the whole thing harder to troubleshoot). Under Rust as it currently exists, that is not a compatibility break, and it would be very bad if it was. We would have to audit every change to every generic function for new drop glue, or else risk breaking users of undroppable types. Nobody is actually going to do that, so we're simply not going to be semver compliant in this hypothetical.
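A sketch (hypothetical, mine) of the hazard NYKevin describes: bar's signature never changes, yet a later version acquires a code path that can drop its argument, which no semver tooling would flag.

struct Wrapper<T>(T);

// v1: `t` is always moved into the wrapper; no drop glue for `t` is needed.
fn bar<T>(t: T) -> Wrapper<T> {
    Wrapper(t)
}

// v2: same signature, but the logging call may unwind, so drop glue for `t`
// now exists on the panic path. Under today's rules this is not a breaking
// change; with monomorphisation-checked undroppable types it would silently
// break any caller instantiating bar with such a type.
fn bar_v2<T>(t: T) -> Wrapper<T> {
    log_call();   // may panic (printing can fail), dropping `t` while unwinding
    Wrapper(t)
}

fn log_call() {
    println!("bar called");
}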
Posted Feb 13, 2025 8:00 UTC (Thu)
by khim (subscriber, #9252)
[Link] (1 responses)
> We do not want to recreate the C++ situation where all the types match, but then something blows up in monomorphization.

Well… it's like Greenspun's tenth rule. Rust didn't want to “recreate the C++ situation” and, as a result, it just made a bad, non-functional copy. As you saw, “something blows up in monomorphization” is already possible; only it's unreliable, has bad diagnostics, and is in all aspects worse than C++. Perfect is the enemy of good, and this story is a great illustration, IMNSHO.

Yeah. A great/awful property which works nicely, most of the time, but falls to pieces when you really try to push it. It's convenient till you need to write 566 methods instead of 12. At that point it becomes both a PITA and compilation times skyrocket.

But when these bounds are not there from the beginning, they become more of a “semver rejection”. That's why we still have no support for lending iterators in for. And said lending iterator was a showcase of GATs four years ago.

At some point you have to accept that your language can't give you a perfect solution and can only give you a good one. Rust developers haven't accepted that yet, and thus we don't have solutions for many issues at all. And when the choice is between “that's impossible” and “that's possible, but with caveats”, practical people pick the latter option.

Except, as you have already demonstrated, that's not true: this capability already exists in Rust – and “perfect solutions” don't exist… after 10 years of development. Maybe it's time to accept the fact that “perfect solutions” are not always feasible. That's already the reality; Rust is already like this. It's time to just accept that.
Posted Feb 13, 2025 18:19 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link]
This is exactly the same attitude that everyone had towards borrow checking before Rust existed.
Sure, *eventually* you probably do have to give up and stop trying to Change The World - a language has to be finished at some point. But Rust is very obviously not there yet. It would be a shame if they gave up on having nice, well-behaved abstractions just because some of the theory is inconveniently complicated. Traits and bounds are improving, slowly, along with most of the rest of the language. For example, the standard library already makes some use of specialization[1][2], a sorely missing feature that is currently in the process of being stabilized.
Rust is not saying "that's impossible." They're saying "we want to take the time to try and do that right." I say, let them. Even if they fail, we can learn from that failure and adapt. But if you never try, you can never succeed.
[1]: https://rust-lang.github.io/rfcs/1210-impl-specialization.html
[2]: https://doc.rust-lang.org/src/core/iter/traits/iterator.r...
Posted Feb 12, 2025 20:28 UTC (Wed)
by plugwash (subscriber, #29694)
[Link] (16 responses)
There are two problems though.
1. A type can be forgotten without being dropped, known as "leaking" the value. There was a discussion in the run-up to Rust 1.0 about whether this should be considered unsafe, which ultimately came down on the side of no.
2. Values are dropped on panic; if the compiler can't prove your code won't panic, then it will have to generate the drop glue.
Posted Feb 12, 2025 21:55 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (15 responses)
A weakly-undroppable type may not be dropped. It may be leaked, forgotten, put into an Arc/Rc cycle, or smuggled inside of ManuallyDrop. It may also be dropped on an unwinding panic (with the understanding that panics are bad, unwinding is worse, and some features just won't play nicely with them). It does not provide a safety guarantee, so unsafe code must assume that undroppable types may still get lost by other means.
A strongly-undroppable type may not be dropped, and additionally provides a safety guarantee that its drop glue is never called. It cannot be dropped on unwinding panic, so such panics are converted into aborts if they would drop the object (you can still use catch_unwind to manually handle the situation, if you really want to). Unsafe code may assume that the object is never dropped, but still may not make any other assumptions about the object's ultimate fate.
An unleakable type may not be leaked or forgotten. It must always be dropped or destructured. You may not put it into Rc, Arc, std::mem::forget, ManuallyDrop, MaybeUninit (which is just ManuallyDrop in a funny hat), Box::leak, or anything else that could cause it to become leaked (unsafe code may do some or all of these things, but must not actually leak the object). You also may not define a static of that type, because statics don't get dropped at exit and are functionally equivalent to leaking an instance on startup.
A strongly-linear type is both strongly-undroppable and unleakable. It cannot be dropped or leaked by any means, and is subject to all of the constraints listed above. It may only be destructured by code that has visibility into all of its fields.
Now my commentary:
* Weakly-undroppable types are already very useful as a form of linting. For example, the std::fs::File type currently does not have a close() method, because the type already closes itself on drop. But that means that it swallows errors when it is closed. The documentation recommends calling sync_all() (which is equivalent to fsync(2)) if you care about errors, but I imagine that some filesystem developers would have choice words about doing that in lieu of checking the error code from close(2). If File were weakly-undroppable, then it could provide a close() method that returns errors (e.g. as Result<()>) and fails the compilation if you forget to call it (see the sketch after this list). This isn't a safety issue since the program won't perform UB if you forget to close a file, so we don't need a strong guarantee that it is impossible to do so. We just want a really strong lint to stop the user from making a silly mistake. It would also help with certain problems involving async destructors, but I don't pretend to understand async nearly well enough to explain that. On the downside, it would interact poorly with most generic code, and you'd probably end up copying the semantics of ?Sized to avoid massive backcompat headaches (i.e. every generic type would be droppable by default, and would need to be qualified as ?Drop to allow undroppable types).
* I'm not sure that strongly-undroppable types provide much of a useful improvement over weakly-undroppable types, and it would be absurd to provide both features at once. But it could be argued that having one exceptional case where drop glue is invoked is a recipe for bugs, so it might be a cleaner implementation. OTOH, if you have chosen to have unwinding panics, you probably don't want them to magically transform into aborts just because some library in your project decided to use an undroppable somewhere. I currently think that weakly-undroppable is the more pragmatic choice, but I think there are valid arguments to the contrary.
* Unleakable types would allow the original API for std::thread::scope to be sound, and would probably enable some other specialized typestates. They're also pretty invasive, although probably not quite as much as undroppable types. They would not solve the File::close() problem.
* Strongly-linear types are just the combination of two of the above features. If either feature is too invasive to be practical, then so are strongly-linear types. But they would provide a strong guarantee that every instance is cleaned up in one and only one way.
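A sketch of the error-returning close() such a type could offer. The weakly-undroppable enforcement itself does not exist in today's Rust; this only shows the consume-on-close shape, assuming a Unix target and the libc crate:

use std::fs::File;
use std::io;
use std::os::fd::IntoRawFd;

struct CheckedFile(File);

impl CheckedFile {
    // Consumes the file, so it cannot be used (or re-closed) afterwards, and
    // surfaces the close(2) error instead of swallowing it in Drop.
    fn close(self) -> io::Result<()> {
        let fd = self.0.into_raw_fd();   // defuses File's own Drop
        // SAFETY: we own `fd` and close it exactly once.
        if unsafe { libc::close(fd) } == 0 {
            Ok(())
        } else {
            Err(io::Error::last_os_error())
        }
    }
}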
Of course, there is another problem: We cannot guarantee that an arbitrary Turing-complete program makes forward progress. If the program drops into an infinite loop, deadlock, etc., then no existing object will ever get cleaned up, meaning that everything is de facto leaked whether our types allow for it or not. To some extent, this is fine, because a program stuck in an infinite loop will never execute unsafe code that makes assumptions about how objects are cleaned up. To some extent, it is not fine, because we can write this function (assuming that ?Leak means "a type that can be unleakable"):
fn really_forget<T: Send + 'static + ?Leak>(t: T) {
    std::thread::spawn(move || { let _t = t; loop { std::thread::park(); } });
}
...or some variation thereof, and there is probably no general way to forbid such functions from existing. So any type that is Send + 'static (i.e. has no lifetime parameters and can be moved between threads) should be implicitly Leak.
The "obvious" approach is to make 'static imply Leak, and require all unleakable (and maybe also undroppable) types to have an associated lifetime parameter, which describes the lifetime in which they are required to be cleaned up. More pragmatically, you might instead say that 'static + !Leak is allowed as a matter of type coherence, but provides no useful guarantees beyond 'static alone, and unsafe code must have a lifetime bound if it wants to depend on something not leaking. I'm not entirely sure how feasible that is in practice, but it is probably more theoretically sound than just having !Leak imply no leaks by itself, and unsafe code probably does want to have a lifetime bound anyway (it provides a more concrete and specific guarantee than "no leaks," since it allows you to assert that object A is cleaned up no later than object B).
Posted Feb 12, 2025 22:33 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (5 responses)
Note that Rust could, with some effort, have a method fn close(self) -> io::Result<()>, without the weakly-undroppable property, so that developers who really care can get at the errors from closing a file. It'd be stronger if it failed the compilation when you forgot to call it, but it'd resolve the issue with those filesystem developers.
In practice, though, I'm struggling to think of a case where sync_all() (also known as fsync(2)) is the wrong thing, and checking returns from close(2) is the right thing. The problem is that close returning no error is a rather nebulous state - there's not really any guarantee about what it means, beyond Linux telling you that the FD is definitely closed (albeit this is non-standard - the FD state is "unspecified" on error by POSIX) - whereas fsync at least guarantees that this file's data and its metadata are fully written to the permanent storage device.
Posted Feb 12, 2025 23:06 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (4 responses)
High-performance file I/O is an exercise in optimism. By not calling fsync, you accept some probability of silent data loss in exchange for higher performance. But there's an even more performant way of doing that: You can just skip all the calls to File::write(), and for that matter, skip opening the file altogether, and just throw away the data now.
Presumably, then, it is not enough to just maximize performance. We also want to lower the probability of data loss as much as possible, without compromising performance. Given that this is an optimization problem, we can imagine various different points along the tradeoff curve:
* Always call fsync. Zero probability of silent data loss (ignoring hardware failures and other things beyond our reasonable control), but slower.
* Never call fsync and never check the close return code. Significant probability of silent data loss, but faster.
* Never call fsync, but do check the close return code. Presumably a lower probability of silent data loss, since you might catch some I/O errors, but almost as fast as not checking the error code (in the common case where there are no errors). In the worst case, this is a main memory load (cache miss) for the errno thread-local, followed by comparing with an immediate.
Really, it's the middle one that makes no sense, since one main memory cache miss is hardly worth writing home about in terms of performance. Maybe if you're writing a ton of small files really quickly, but then errno will be in cache and the performance cost becomes entirely unremarkable.
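For comparison, the first point is expressible with std's real API today; sync_all() is the documented way to observe write-back errors before Drop's silent close:

use std::fs::File;
use std::io::{self, Write};

fn save(data: &[u8]) -> io::Result<()> {
    let mut f = File::create("out.bin")?;
    f.write_all(data)?;
    f.sync_all()?;   // fsync(2): reports I/O errors that a bare drop would swallow
    Ok(())           // the implicit close still can't report errors, but the
                     // data is already known to be on stable storage
}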
Posted Feb 13, 2025 10:13 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (3 responses)
This is the core of our disagreement - as far as I can find, the probability of silent data loss on Linux is about the same whether or not you check the close return code, with the exception of NFS. Because you can't do anything with the FD after close, all FSes but NFS seem to only return EINTR (if a signal interrupted the call) or EBADF (you supplied a bad file descriptor), and in either case, the FD is closed. NFS is slightly different, because it can return the server error associated with a previous write call, but it still closes the FD, so there is no way to recover.
Posted Feb 13, 2025 17:54 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
Of course error-on-close is recoverable, you just delete the file and start over. Or if that doesn't work, report the error to the user so that they know their data has not been saved (and can take whatever action they deem appropriate, such as saving the data to a different filesystem, copying the data into the system clipboard and pasting it somewhere to be preserved by other means, etc.).
Posted Feb 13, 2025 18:12 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Until you can't start over ... which is probably par for the course in most data entry applications ...
Cheers,
Wol
Posted Feb 14, 2025 10:49 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
But, by the nature of Linux's close syscall, error on close means one of three things: EINTR (a signal interrupted the call at a funny moment), EBADF (your program has a bug and supplied a bad file descriptor), or, on NFS only, a deferred error from an earlier write.
Deleting the file is the worst possible thing to do with an error on close - two of the three are cases where the data has been saved, and it's an oddity of your code that resulted in the error being reported on close. The third is one where the file is on an NFS mount, the NFS server is set up to write data immediately upon receiving a command (rather than on fsync, since you won't get a delayed error for a write if the NFS server is itself caching) and you didn't fsync before close (required on NFS to guarantee that you get errors).
But even in the latter case, close is not enough to guarantee that you get a meaningful error that tells you that the data has not been saved - you need fsync, since the NFS server is permitted to return success to all writes and closes, and only error on fsync.
And just to be completely clear, I think this makes error on close useless, because all it means in most cases is either "your program has a bug" or "a signal happened at a funny moment". There's a rare edge case if you have a weird NFS setup where an error on close can mean "data lost", but if you're not in that edge case (which cannot be detected programmatically, since it depends on the NFS server's configuration), the two worst possible things you can do if there's an error on close are "delete the file (containing safe data) and start over" and "report to the user that you've saved their data, probably, so that they can take action just in case this is an edge-case system".
On the other hand, fsync deterministically tells you either that the data is as safe as can reasonably be promised, or that it's lost and you should take action.
Posted Feb 13, 2025 13:24 UTC (Thu)
by daroc (editor, #160859)
[Link] (8 responses)
You are of course right in general; the price that Idris and Agda pay for being able to say that some programs terminate is that the termination checker is not perfect, and will sometimes disallow perfectly okay programs. So I don't think it's necessarily a good idea for Rust to add termination checking to its type system, but it is technically a possibility.
Posted Feb 13, 2025 15:13 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (7 responses)
Can't Rust have several checkers? If any one of them returns "okay", then the proof has passed and the code is okay. The programmer could then also add hints, maybe saying "run these checkers in this order", or "don't bother with these checkers", or whatever. So long as the rule is "any positive result from a checker is okay", that could reduce the checking time considerably.
Oh - and I don't know how many other languages do this sort of thing, but DataBASIC decrements -1 to 0 to find the end of an array :-) It started out as a "feature", and then people came to rely on it so it's standard documented behaviour.
(I remember surprising a C tutor by adding a bunch of bools together - again totally normal DataBASIC behaviour, but it works in C as well because I believe TRUE is defined as 1 in the standard?)
Cheers,
Wol
Posted Feb 13, 2025 15:34 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (5 responses)
It's not the runtime of the checker that's the problem; it's that we cannot write a checker that definitely accepts or rejects all reasonable programs. The underlying problem is that, thanks to Rice's Theorem (a generalisation of Turing's Halting Problem), a checker or combination of checkers can, at best, give you one of "undecided", "property proven to hold", or "property proven to not hold". There are a couple of get-outs we use to make this tractable.
If we could ensure that "undecided" was small enough, we'd not have a problem - but the problem we have is that all known termination checkers reject programs that humans believe terminate.
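One concrete example (mine, not farnz's) of that gap: McCarthy's 91 function terminates for every input, and humans can prove it, but the proof needs a non-structural measure, so typical termination checkers return "undecided" and reject it.

// Total for all inputs (returns 91 for n <= 100, and n - 10 otherwise), but
// the nested recursion defeats purely structural termination checking.
fn mc91(n: i64) -> i64 {
    if n > 100 { n - 10 } else { mc91(mc91(n + 11)) }
}

fn main() {
    assert_eq!(mc91(1), 91);
}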
Posted Feb 13, 2025 18:01 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (4 responses)
It only takes one checker to return success for us to know that the Rust code is okay. So if we know (or suspect) which checker is the best one to run, why can't we give Rust hints, to minimise the amount of checking Rust has to do? That would then mean we could run more expensive checkers at less cost.
Cheers,
Wol
Posted Feb 13, 2025 18:31 UTC (Thu)
by daroc (editor, #160859)
[Link] (2 responses)
So the tradeoff will always be between not being able to check this property, or being able to check it but rejecting some programs that are probably fine.
That said, I do think there is a place for termination checking in some languages — the fact that Idris has it lets you do some really amazing things with dependent typing. Whether _Rust_ should accept that tradeoff is a matter of which things it will make harder and which things it will make easier, not just performance.
Posted Feb 14, 2025 15:12 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Feb 14, 2025 15:27 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
Idris has both as part of its totality checker; a function can be partial (in which case it may never terminate or produce a value - it can crash or loop forever) or total (in which case it must either terminate for all possible inputs, or produce a prefix of a possibly infinite result for all possible inputs).
Idris then uses this to determine whether it can evaluate a function at compile time (total functions) or whether it must defer to runtime (partial functions). This becomes important because Idris is dependently typed, so you can write a type that depends on the outcome of evaluating a function; if that function is total, then the type can be fully checked at compile time, while if it's partial, it cannot.
Posted Feb 14, 2025 11:02 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
If you run all the currently known termination checker algorithms that actually come up with useful results (with the exception of "run until a timeout is hit, say it might not terminate if the timeout is hit, or it does terminate if the timeout is not hit"), you're looking at a few seconds at most. The pain is not the time that the algorithms we know of take; it's the fact that most of them return "undecided" on programs that humans can tell will terminate.
Posted Feb 13, 2025 16:06 UTC (Thu)
by adobriyan (subscriber, #30858)
[Link]
It works because "_Bool + _Bool" is promoted to "int + int" first, and then the true values are implicitly converted to 1s.
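The equivalent trick in Rust needs an explicit cast, since there is no implicit bool-to-integer promotion; the language does guarantee that true casts to 1:

fn main() {
    let flags = [true, false, true, true];
    let count: u32 = flags.iter().map(|&b| b as u32).sum();
    assert_eq!(count, 3);
}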