Huang: Rust: A Critical Retrospective
Posted May 27, 2022 21:04 UTC (Fri) by khim (subscriber, #9252)
In reply to: Huang: Rust: A Critical Retrospective by NYKevin
Parent article: Huang: Rust: A Critical Retrospective
> Implementation inheritance is not a hack.
It is. If your superclass implements a dozen interfaces (not atypical for complex objects) and you don't explicitly document which methods call which other methods, when, and how, then you have no idea how changes to one interface in the subclass would affect the other interfaces (if at all).
> It's entirely logically correct, provided that your code religiously complies with the Liskov substitution principle.
Which is only possible if you read and understand the implementation of your superclass. Which throws away all those nice “encapsulation” and “open-closed” principles. Which is then fixed, in turn, with copious amounts of other hacks (the “single-responsibility principle” and the “dependency inversion principle” are band-aids for this problem; many others have been invented, too).
C# and Java tried to fix that mess with default method implementations (in C# 8 and Java 8), but, of course, they cannot do what Rust did and simply forbid implementation inheritance.
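A minimal sketch of that trait-based alternative, in Rust (the `Describe` trait and `Blob` type here are invented for illustration, not taken from the article): shared behaviour lives in a default trait method, and that default can only call other methods declared on the same trait, so there is no hidden base-class implementation an implementor would have to read.

```rust
// A trait with a default method: the closest Rust gets to "inherited"
// behaviour. `summary` is shared, but it can only rely on what the
// trait itself declares (`name` and `len`), never on the internals of
// some base class -- there is no base class to inherit from.
trait Describe {
    fn name(&self) -> String;
    fn len(&self) -> usize;

    // Default implementation; implementors may override it.
    fn summary(&self) -> String {
        format!("{} ({} bytes)", self.name(), self.len())
    }
}

struct Blob {
    data: Vec<u8>,
}

impl Describe for Blob {
    fn name(&self) -> String {
        "blob".to_string()
    }

    fn len(&self) -> usize {
        self.data.len()
    }
    // `summary` comes from the trait's default implementation.
}

fn main() {
    let b = Blob { data: vec![1, 2, 3] };
    println!("{}", b.summary()); // prints "blob (3 bytes)"
}
```

Overriding `summary` in one impl cannot silently change the behaviour of unrelated traits implemented for the same type, which is exactly the cross-interface coupling complained about above.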
> If you *really* want provably correct code, you have to do formal verification, and there's a reason that nobody does that in practice.
They do, they do. Google's Wuffs is a pretty practical project, and Rust's safety model was formally proven (even if the safety of the actual Rust compiler wasn't).
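As a rough illustration of what such a proof covers (a sketch under my own assumptions, not something from the comment; `sum` and `first_or_zero` are invented names): the guarantees apply to the safe subset of the language, while `unsafe` blocks are the small regions that still have to be audited or verified separately to uphold the stated invariants.

```rust
// Safe Rust: the language's guarantees (no data races, no use-after-free,
// no out-of-bounds access) apply to everything in this function.
fn sum(values: &[u32]) -> u64 {
    values.iter().map(|&v| v as u64).sum()
}

// A safe wrapper around an `unsafe` block. The formally proven part is the
// safe surface; the unsafe region is what a human (or a separate proof)
// still has to check against the invariant stated in the SAFETY comment.
fn first_or_zero(values: &[u32]) -> u32 {
    if values.is_empty() {
        0
    } else {
        // SAFETY: we just checked that the slice is non-empty,
        // so index 0 is in bounds.
        unsafe { *values.get_unchecked(0) }
    }
}

fn main() {
    let v = [3u32, 4, 5];
    assert_eq!(sum(&v), 12);
    assert_eq!(first_or_zero(&v), 3);
    assert_eq!(first_or_zero(&[]), 0);
}
```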
I suspect that the only reason people are not doing it more is that buggy, low-quality, barely working code has been the historical norm, and even code used for critical systems is often delivered under “no liability” licenses.
The situation is slowly changing, but we couldn't just suddenly attach stiff penalties to code: existing codebases would instantly become useless and we don't have anything better to replace them with right now.
Posted Jun 2, 2022 22:54 UTC (Thu) by mrugiero (guest, #153040)
I think it will continue to be the norm because extra work means extra time to market. While I don't support cutting corners that way (not everywhere, anyway), there are simply very strong commercial incentives to ship sorta-kinda-works code fast rather than correct code a few months later. It's more or less why software keeps getting more and more inefficient.
Posted Jun 10, 2022 20:36 UTC (Fri) by matu3ba (guest, #143502)
If you look at the code that is verified, there are two approaches: 1. automate a subclass of problems (Wuffs, Rust), or 2. formally verify the code itself (the codegen, or only the logic, with different performance and safety requirements, etc.).
> I think it will continue to be the norm because extra work is extra time to market.
Not really. Fundamental libraries make up a small share of the costs, but getting them wrong carries a huge risk.
That said, product analysis weighs initial costs, maintenance costs, and risks against gains. The products you describe sound like low-trust-brand things (throw-away products, or components that can be restarted on failure with a low chance of data loss and a low cost when it happens).
Huang: Rust: A Critical Retrospective
> The situation is slowly changing, but we couldn't just suddenly attach stiff penalties to code: existing codebases would instantly become useless and we don't have anything better to replace them with right now.
Though for Rust, it is more of a tradeoff than it is for Wuffs.