
Great!


Posted May 16, 2015 3:08 UTC (Sat) by wahern (subscriber, #37304)
In reply to: Great! by cesarb
Parent article: Rust 1.0 released

Blocking on memory failure and dipping into reserve pools are definitely options. I tend to think of them more as kludges. Blocking is non-deterministic. Dipping into the reserve pool is problematic if you don't have a fallback option--what if you miscalculated?

I prefer to fail fast as a discipline. That means I always try to make sure that at any point of failure--memory, a contended thread resource, a data source (file, socket), etc.--my state is consistent. Focus on keeping your state consistent and transparent, and everything else follows from that.
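A minimal sketch of that discipline in modern Rust (the `Sink` type is hypothetical, and `Vec::try_reserve` was stabilized well after 1.0): the append either fully succeeds or leaves the buffer exactly as it was, so a failure can propagate without corrupting state.

```rust
use std::collections::TryReserveError;

// Hypothetical data sink illustrating "consistent state at every
// point of failure": reserve capacity up front with a fallible call.
struct Sink {
    buf: Vec<u8>,
}

impl Sink {
    fn append(&mut self, data: &[u8]) -> Result<(), TryReserveError> {
        // If this fails, `buf` is untouched and the error surfaces
        // to whoever sets policy; if it succeeds, the extend below
        // cannot allocate and so cannot fail.
        self.buf.try_reserve(data.len())?;
        self.buf.extend_from_slice(data);
        Ok(())
    }
}
```

Because the state is consistent on both paths, the caller is free to fail fast, retry, or shed load--the choice stays a policy decision, not a correctness problem.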

Whether I _actually_ fail fast in the end, or leverage some other technique (blocking, reserve pool, cache flushing, etc.) is more of a policy decision. But if you rely on those other techniques from the outset, I think you'll tend to end up with messy and complex code. For one thing, those measures are leaky--blocking, reserve pools, and cache flushing all tend to create cross-component dependencies, at both the interface level and at run-time. A data sink allocating a buffer now needs to know about some other random component's caching interface, or it needs to share or duplicate a reserve pool. And in the end it might all be for naught. It's seeing that kind of complexity, I think, that gives people the idea that handling OOM is impractical.

Plus, much like the phenomenon of buffer bloat, trying to hide resource exhaustion will often only compound the problem at the macro scale. Decisions about how to handle exhaustion are better made at the edges, not by the actors in the middle. The edges can decide to fail or retry; the bulk of the software in the middle should simply be concerned with facilitating such decisions.
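One way to picture that split, as a hedged Rust sketch (both functions are hypothetical names, and `try_reserve` is a post-1.0 API): the middle layer only propagates the exhaustion error, while the edge owns the retry-or-fail policy.

```rust
use std::collections::TryReserveError;

// The "middle": purely mechanical. It makes no policy decision;
// it just surfaces allocation failure to its caller.
fn buffer_record(out: &mut Vec<u8>, rec: &[u8]) -> Result<(), TryReserveError> {
    out.try_reserve(rec.len())?;
    out.extend_from_slice(rec);
    Ok(())
}

// The "edge": owns the policy. Here it retries a bounded number of
// times after trying to release memory, then gives up.
fn ingest_with_policy(out: &mut Vec<u8>, rec: &[u8]) -> bool {
    for _ in 0..3 {
        if buffer_record(out, rec).is_ok() {
            return true;
        }
        // Policy hook: flush caches, shed load, or back off.
        // Shrinking our own buffer stands in for that here.
        out.shrink_to_fit();
    }
    false
}
```

The point of the shape is that `buffer_record` never grows a dependency on anyone's cache or reserve pool; all of that coupling, if it exists at all, lives in one place at the edge.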



