
Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 0:12 UTC (Fri) by khim (subscriber, #9252)
In reply to: Zig heading toward a self-hosting compiler by ofranja
Parent article: Zig heading toward a self-hosting compiler

>There are a number of crates with support for "no_std", and that's usually an assumption if you're doing embedded programming.

About 1% of the total. So already-not-that-popular Rust suddenly becomes a hundred times less capable for you.

>You could ask the same thing about C++ by the way, and the answer would be the same.

Yes and no. On the one hand, C++ is much, much bigger, so even with 1% of the codebase available you still have more choice.

On the other hand, C++20 made a total blunder: coroutines, a feature which looks like a godsend for embedded… are built on top of dynamic allocation. Doh.

Sure, you can play some tricks: you can disassemble the compiled code and check whether allocations were actually elided, you can look at how much stack is used… but at that point “simply pick another language” starts looking like the more viable approach long-term.

>Rust has "core" and "std"; "core" is fine since it does not allocate (it doesn't even know what a heap is), "std" is the high-level library.

Thanks for the explanation. I knew that some crates are non-allocating; I was just not sure how much actual ready-to-use code you could still use if you give up on “std”… and the answer is about what I expected: 1% or so.
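To make the “core”/“std” split concrete, here's a minimal `no_std` sketch (illustrative code, not from any particular crate): `core` gives you slices, iterators and `Option`, but nothing that can allocate.

```rust
// A library crate built against `core` only: no Vec, String, or Box,
// just borrowed data. (A binary would additionally need a panic handler.)
#![no_std]

/// Sum a borrowed slice; iterators come from `core` and never allocate.
pub fn sum(values: &[u32]) -> u32 {
    values.iter().copied().sum()
}

/// Largest value, still without touching a heap.
pub fn max(values: &[u32]) -> Option<u32> {
    values.iter().copied().max()
}
```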

>The only thing that allocates is the std library - which is already optional, and will likely have support for fallible allocation in the near future.

You can't call something which is used by 99% of the codebase “optional”. It just doesn't work. That's the mistake D made (and which probably doomed it): the GC was declared “optional” in that same sense — yet the majority of the codebase couldn't be used without it. This meant that you couldn't do some things in the language because the GC is “optional” — yet you couldn't, practically speaking, go without it, because then you would have to write everything from scratch. Thus you got the worst of both worlds.
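For what it's worth, the “fallible allocation” mentioned in the quote has a concrete shape: APIs like `Vec::try_reserve` (still being stabilized as of this writing) turn allocation failure into a `Result` you can handle instead of an abort. A minimal sketch:

```rust
use std::collections::TryReserveError;

/// Allocate an `n`-byte buffer up front, reporting OOM as a value
/// instead of aborting the process.
fn load_exactly(n: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(n)?; // fails with Err(_) instead of aborting
    buf.resize(n, 0);    // capacity already reserved: cannot reallocate
    Ok(buf)
}

fn main() {
    match load_exactly(usize::MAX / 2) {
        Ok(_) => println!("allocated"),
        // The "out of memory, sorry" message asked for below.
        Err(e) => eprintln!("out of memory, sorry: {}", e),
    }
}
```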

>No, the easier way is to do static memory allocation so you never OOM.

That's not always feasible. And embedded is not the whole world. I still hate the fact that I can't just open a large file and see a simple “out of memory, sorry” message instead of looking at a frozen desktop which I'm forced to reset, because otherwise my system would be unusable for hours (literally: I measured it — between one and two hours before the OOM killer would finally wreak enough havoc for the system to react to Ctrl-Alt-Fx and switch to a text console… which by then is no longer needed, because some processes have been killed and the GUI is responsive again).
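To be fair to the quoted suggestion, here is roughly what “static allocation so you never OOM” looks like in practice — a sketch (names illustrative) where all storage is fixed up front and “out of memory” becomes an ordinary return value:

```rust
// Embedded-style static allocation: the buffer size is decided at
// compile time, so the program can never be surprised by the heap.
struct FixedLog<const N: usize> {
    buf: [u8; N],
    len: usize,
}

impl<const N: usize> FixedLog<N> {
    fn new() -> Self {
        Self { buf: [0; N], len: 0 }
    }

    /// Returns Err instead of growing: "allocation failure" becomes an
    /// ordinary, locally handled condition instead of a process abort.
    fn push(&mut self, byte: u8) -> Result<(), ()> {
        if self.len == N {
            return Err(()); // buffer full — the static-allocation "OOM"
        }
        self.buf[self.len] = byte;
        self.len += 1;
        Ok(())
    }
}

fn main() {
    let mut log = FixedLog::<4>::new();
    for b in b"hello" {
        if log.push(*b).is_err() {
            eprintln!("log full, dropping byte");
        }
    }
}
```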

>If you have dynamic allocation you need to handle OOM, period. Arenas help with the cost but only push the problem to a different place.

Well… arenas make it feasible to handle OOM… but that, by itself, doesn't mean that anyone would actually bother. That's true.
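For illustration, a minimal bump-arena sketch (not any particular crate): one up-front allocation, after which exhaustion is a local, recoverable condition — the problem is pushed to creation time, exactly as the quote says.

```rust
use std::cell::Cell;

/// Minimal bump arena: one up-front allocation, one failure point.
/// Allocation inside the arena can fail (returns None) but never
/// OOMs the process mid-flight.
struct Arena {
    storage: Vec<u8>,
    used: Cell<usize>,
}

impl Arena {
    fn with_capacity(cap: usize) -> Self {
        Self { storage: vec![0; cap], used: Cell::new(0) }
    }

    /// Hand out `n` bytes from the arena, or None if it is exhausted.
    fn alloc(&self, n: usize) -> Option<&[u8]> {
        let start = self.used.get();
        let end = start.checked_add(n)?;
        if end > self.storage.len() {
            return None; // arena exhausted: the relocated "OOM"
        }
        self.used.set(end);
        Some(&self.storage[start..end])
    }
}

fn main() {
    let arena = Arena::with_capacity(1024);
    assert!(arena.alloc(512).is_some());
    assert!(arena.alloc(600).is_none()); // fails cleanly, no abort
}
```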

>As I said before: improving C is one thing but maybe not enough for justifying a new language; a paradigm shift, however, is something much more appealing.

Paradigm shift, my ass. How did we end up in a world where “a program shouldn't randomly die without warning” is considered “a paradigm shift”, I wonder?



Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 2:19 UTC (Fri) by ofranja (guest, #11084)

By paradigm shift I meant a safe-by-default language without GC.

I already addressed your points in my other comments and I don't feel like repeating myself — especially to someone being rude and sarcastic — so let's just agree to disagree.

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 13:20 UTC (Fri) by mathstuf (subscriber, #69389)

> I still hate the fact that I can't just open a large file and see a simple “out of memory, sorry” message instead of looking at a frozen desktop which I'm forced to reset, because otherwise my system would be unusable for hours

There's some super-templated code in the codebase I work on regularly that eats 4G+ of memory per TU. I've learned to wrap it up in a `systemd-run --user` command which limits that command's memory. This way it is always first on the chopping block for exhausting its allocated slot (instead of X, tmux, or Firefox, all of which are far more intrusive to recover from). Of course, this doesn't help with opening large files in existing editors, but I tend to open and close Vim instances all the time, so it would work at least for my usage pattern.
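The invocation is roughly the following (the limit value and the build command are illustrative; `MemoryMax=` is the systemd resource-control property that enforces the cap):

```sh
# Run the heavy build step in its own transient scope with a hard memory
# cap, so the kernel OOM-kills it rather than X, tmux, or Firefox.
systemd-run --user --scope -p MemoryMax=6G \
    make -C build heavy_template_tu  # illustrative command
```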

Zig heading toward a self-hosting compiler

Posted Oct 10, 2020 16:52 UTC (Sat) by ofranja (guest, #11084)

> Of course, this doesn't help opening large files in existing editors [..]

Just to clarify the previous point of the discussion: neither would a language that handles OOM. It makes zero difference, actually: since malloc practically never fails on systems with overcommit enabled, there's no OOM to handle from the program's point of view.

There are a few solutions to the problem. The one you mentioned works, but it depends on the program's behaviour, and it can still lead to OOM if other programs need more memory than expected. The most general one — without disabling overcommit — is to disable swap and set a lower limit on the amount of cached data kept in memory. When memory runs out, the system will kill something instead of thrashing, since there would be no pages left to allocate.
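For reference, the relevant knobs (the cache-related tuning is distribution-specific and omitted here):

```sh
# Inspect the kernel's overcommit policy:
#   0 = heuristic (default), 1 = always overcommit, 2 = strict accounting
sysctl vm.overcommit_memory
# Disable swap, as suggested above:
sudo swapoff -a
```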

Zig heading toward a self-hosting compiler

Posted Oct 10, 2020 23:04 UTC (Sat) by khim (subscriber, #9252)

>When memory runs out, the system will kill something instead of thrashing, since there would be no pages left to allocate.

It's not useful. On my desktop with 192GB of RAM it takes between two and three hours before the system finally comes back. And quite often the whole thing ends up useless anyway, because some critical process of the modern desktop becomes half-alive: it continues to run but no longer responds to D-Bus requests.

You can't do that with today's desktop, period.

You can build a series of kludges which make your life tolerable (running the compilation in a cgroup, as above, is one way to prevent an OOM situation for the whole system), but you can't do what you could with humble Turbo Pascal 7.0: open files till memory runs out, then close some when the system complains.

You have to buy a system big enough to handle all your needs and keep an eye on it so it doesn't get overloaded.

This works because today's systems are ridiculously oversized compared to what Turbo Pascal 7.0 usually had… it just looks a bit ridiculous…

