
Zig heading toward a self-hosting compiler


Posted Oct 10, 2020 15:16 UTC (Sat) by dvdeug (guest, #10998)
In reply to: Zig heading toward a self-hosting compiler by khim
Parent article: Zig heading toward a self-hosting compiler

I suspect it was when Windows arrived, because that's when the first serious multiprocess computing happened on the PC, and when the first complex OS interactions appeared. If I have several applications open when I hit the memory limit, what fails is more or less random: it might be Photoshop, or the music player, or some random background program. It might also be some lower-level OS code that has little option but to invoke the OOM killer or crash the system. It's quite possible you can't even open a dialog box to tell the user about the problem without memory, nor save anything. On top of that, your multithreaded code (and pretty much all GUI programs should run their interface on a separate thread) may be hitting this problem on multiple threads at once. What was once one program running on an OS simple enough to avoid memory allocation is now a complex collection of individually more complicated programs on a complex OS.



Zig heading toward a self-hosting compiler

Posted Oct 10, 2020 22:50 UTC (Sat) by khim (subscriber, #9252) [Link] (2 responses)

>It's quite possible you can't open a dialog box to tell the user of the problem without memory,

MacOS classic solved that by setting aside some memory for that dialog box.

>nor save anything.

Again: not a problem on classic MacOS, since there an application requested its memory upfront and then had to live within it. Other apps couldn't “steal” it.

>I suspect it was when Windows arrived

And made it impossible to reliably handle OOM, yes. Most likely.

>What was once one program running on an OS simple enough to avoid memory allocation is now a complex collection of individually more complicated programs on a complex OS.

More complex than a typical z/OS installation? Which handles OOM just fine?

I don't think so.

No, I think you are right: when Windows (the original one, not Windows NT 3.1, which properly handles OOM too) and Unix (because of the fork/exec model) made it impossible to reliably handle OOM conditions, people stopped caring.

SMP or general complexity had nothing to do with it. Just general Rise of Worse is Better.

As I've said: it's not impossible to handle, and not even especially hard… but in a world where people are trained to accept that programs may fail randomly for no apparent reason, that sort of thing is just seen as entirely unnecessary.

Zig heading toward a self-hosting compiler

Posted Oct 11, 2020 4:25 UTC (Sun) by dvdeug (guest, #10998) [Link] (1 responses)

> Again: not a problem on MacOS since there application requests memory upfront and then have to deal with it.

You could do that anywhere. Go ahead and allocate all the memory you need upfront.

> More complex than typical zOS installation? Which handles OOM just fine?

If it does, it's because it keeps things in nice neat boxes and runs a closed set of IBM hardware, in the way that a desktop OS can't and doesn't. A kindergarten class at recess is more complex in some ways than a thousand military men marching in formation, because you never know when a kindergartner is going to punch another one or make a break for freedom.

> SMP or general complexity had nothing to do with it.

That's silly. If you're writing a game for a Nintendo or a Commodore 64, you know how much memory you have, and you will be the only program running. MS-DOS was slightly more complicated, with TSRs, but not a whole lot. Things nowadays are complex: a message box calls into a windowing system and needs fonts and text shapers loaded into memory; the original MacOS didn't handle Arabic or Hindi or anything beyond 8-bit charsets. Modern systems have any number of processes popping up and going away, and even if you're, say, a word processor, that web browser or PDF reader may be as important as you. Memory amounts vary all over the place, memory usage varies all over the place, and calling a function that tells you how much memory is left won't tell you anything particularly useful about what will be happening sixty seconds from now. What was once a tractable problem of telling how much memory is available is now completely unpredictable.

> Just general Rise of Worse is Better.

To quote that essay: "However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach." The simple fact is you're adding a lot of complexity to your system; there's a reason why so much code is written in memory-managed languages like Python, Go, Java, C# and friends. You're spending a lot of programmer time to solve a problem that rarely comes up and that you can't do much about when it does. (If it might be important, regularly autosave a recovery file; OOM is not the only or even most frequent reason your program or the system as a whole might die.)

> in a world where people just trained to accept the fact that programs may fail randomly for no apparent reason

How, exactly, is issuing a message box saying "ERROR: Computer jargon" going to help with that? Because that's all most people are going to read. There is no way you can fix the problem that failing to open a new tab or file because the program is out of memory is going to be considered "failing randomly for no apparent reason" by most people.

I fully believe you could do better, but it's like BeOS; it was a great OS, but when it was made widely available in 1998, between Windows 98 and an OS that didn't run a browser that could deal with the Web as it was in 1998, people went with Windows 98. Worse-is-better in a nutshell.

Zig heading toward a self-hosting compiler

Posted Oct 11, 2020 19:49 UTC (Sun) by Wol (subscriber, #4433) [Link]

> "However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach."

Like another saying - "the wrong decision is better than no decision". Just making a decision NOW can be very important - if you don't pick a direction to run - any direction - when a bear is after you then you very quickly won't need to make any further decisions!

Cheers,
Wol

16-bit Windows applications tried to deal with OOM

Posted Oct 11, 2020 12:38 UTC (Sun) by quboid (subscriber, #54017) [Link] (3 responses)

Perhaps it was when 32-bit Windows arrived that applications stopped caring about running out of memory.

The 16-bit Windows SDK had a tool called STRESS.EXE which, among other things, could cause memory allocation failures in order to check that your program coped with them correctly.

16-bit Windows required large memory allocations (GlobalAlloc) to be locked when being used and unlocked when not so that Windows could move the memory around without an MMU. It was even possible to specify that allocated memory was discardable and you didn't know whether you'd still have the memory when you tried to lock it to use it again - this was great for caches and is a feature I wish my web browser had today. :-)

Mike.

16-bit Windows applications tried to deal with OOM

Posted Oct 11, 2020 21:14 UTC (Sun) by dtlin (subscriber, #36537) [Link]

Android has discardable memory: ashmem regions can be unpinned, and the system may purge them under memory pressure. I think you can simulate this with madvise(MADV_FREE), but ashmem will tell you whether a region was purged and MADV_FREE won't (the pages will just be silently zeroed).

16-bit Windows applications tried to deal with OOM

Posted Oct 11, 2020 22:28 UTC (Sun) by roc (subscriber, #30627) [Link] (1 responses)

You should be glad browsers don't have that today. If they did, people would use it, and browsers on developer machines would rarely discard memory, so when your machine discards memory applications would break.

16-bit Windows applications tried to deal with OOM

Posted Oct 15, 2020 16:19 UTC (Thu) by lysse (guest, #3190) [Link]

Better they break by themselves than freeze up the entire system while it tries to page every single executable VM page through a single 4K page of physical RAM, because the rest of it has been overcommitted to memory that just got written to.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds