Zig heading toward a self-hosting compiler

Posted Oct 7, 2020 20:04 UTC (Wed) by ballombe (subscriber, #9523)
In reply to: Zig heading toward a self-hosting compiler by Cyberax
Parent article: Zig heading toward a self-hosting compiler

You can disable overcommit, see /proc/sys/vm/overcommit_memory
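For the record, a minimal C sketch (assuming procfs is mounted at /proc) that reports the current mode; writing "2" to that same file as root switches to strict accounting:

#include <stdio.h>
#include <stdlib.h>

/* Print the current overcommit policy: 0 = heuristic, 1 = always
 * overcommit, 2 = strict commit accounting. */
int main(void) {
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    int mode;
    if (fscanf(f, "%d", &mode) != 1) {
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    printf("vm.overcommit_memory = %d\n", mode);
    return EXIT_SUCCESS;
}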



Zig heading toward a self-hosting compiler

Posted Oct 7, 2020 20:07 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) (8 responses)

It doesn't actually disable it. You will still get killed by the OOM killer rather than getting NULL from malloc(). In my experience, to force malloc() on Linux to return NULL, you need to disable overcommit and try a really large allocation.
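A minimal sketch of that experiment (the 1 TiB figure is an arbitrary stand-in for "larger than the commit limit"):

#include <stdio.h>
#include <stdlib.h>

/* With vm.overcommit_memory=2, a request well past the commit limit
 * should make malloc() return NULL instead of the process being
 * OOM-killed later. */
int main(void) {
    size_t huge = (size_t)1 << 40;  /* 1 TiB */
    void *p = malloc(huge);
    if (p == NULL) {
        fprintf(stderr, "malloc of %zu bytes failed up front\n", huge);
        return EXIT_FAILURE;
    }
    puts("malloc unexpectedly succeeded");
    free(p);
    return EXIT_SUCCESS;
}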

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 13:17 UTC (Fri) by zlynx (guest, #2285) (7 responses)

I don't think you set it correctly, then, because strict commit definitely works. I run my servers that way.

You have to read the documentation pretty carefully, because there are actually three modes: 0 for heuristic overcommit, 1 for always overcommit, and 2 for strict commit (well, strict depending on the overcommit_ratio value).
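In strict mode the kernel enforces CommitLimit = swap + RAM * overcommit_ratio / 100, and both the limit and the current total are visible in /proc/meminfo. A quick sketch to check them:

#include <stdio.h>
#include <string.h>

/* Print the enforced commit limit and the currently committed total.
 * Under vm.overcommit_memory=2, an allocation that would push
 * Committed_AS past CommitLimit fails. */
int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "CommitLimit:", 12) == 0 ||
            strncmp(line, "Committed_AS:", 13) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}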

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 17:40 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) (6 responses)

Strict commit works, sure. In the sense that the OOM killer will come out immediately, rather than later.

As I've shown, there's simply no way to get -ENOMEM out of sbrk(), to take one example.

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 18:25 UTC (Fri) by zlynx (guest, #2285) (5 responses)

And yet, it does do it somehow. I just wrote a little C program to test it, and tried it on my laptop and one of my servers.

#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Parse a size argument with an optional k/m/g binary suffix. */
intptr_t arg_to_size(const char *arg) {
  assert(sizeof(intptr_t) == sizeof(long));

  errno = 0;
  char *endp;
  long result = strtol(arg, &endp, 0);
  if (errno) {
    perror("strtol");
    exit(EXIT_FAILURE);
  }
  if (*endp != '\0') {
    switch (*endp) {
    default:
      exit(EXIT_FAILURE);
      break;
    case 'k':
      result *= 1024;
      break;
    case 'm':
      result *= 1024 * 1024;
      break;
    case 'g':
      result *= 1024 * 1024 * 1024;
      break;
    }
  }
  return result;
}

int main(int argc, char *argv[]) {
  if (argc < 2)
    exit(EXIT_FAILURE);
  intptr_t inc = arg_to_size(argv[1]);
  if (inc < 0)
    exit(EXIT_FAILURE);

  printf("allocating 0x%lx bytes\n", (long)inc);
  /* sbrk() returns (void *)-1 and sets errno on failure. */
  void *prev = sbrk(inc);
  if (prev == (void *)(-1)) {
    perror("sbrk");
    exit(EXIT_FAILURE);
  }

  return EXIT_SUCCESS;
}
On a 32 GiB server with strict overcommit:
$ ./sbrk-large 24g
allocating 0x600000000 bytes

$ ./sbrk-large 28g
allocating 0x700000000 bytes
sbrk: Cannot allocate memory
Here are the interesting bits from the strace on the strict-commit server for ./sbrk-large 32g. You can see how sbrk() is emulated: the library queries the current break with brk(NULL), then asks for the break moved up by the increment. When it sees that the break did not move, it returns an error.
brk(NULL)                               = 0x1d71000
brk(0x801d71000)                        = 0x1d71000
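That emulation might look roughly like this (a sketch matching the trace above, not glibc's actual source; the raw brk syscall returns the new break, so passing 0 just queries it):

#include <errno.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Query the current break, ask for it moved up by inc, and report
 * ENOMEM if the kernel left it where it was. */
static void *my_sbrk(intptr_t inc) {
    uintptr_t cur = (uintptr_t)syscall(SYS_brk, 0);      /* brk(NULL) */
    uintptr_t wanted = cur + (uintptr_t)inc;
    uintptr_t got = (uintptr_t)syscall(SYS_brk, wanted); /* try to move */
    if (got != wanted) {
        errno = ENOMEM;
        return (void *)-1;
    }
    return (void *)cur;                                  /* old break */
}

int main(void) {
    /* Grab one page the same way the test program does. */
    return my_sbrk(4096) == (void *)-1;
}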
And on the laptop, after turning on full overcommit: heuristic mode was failing on big allocations, but with overcommit_memory set to 1 there were no problems.
$ ./sbrk-large 64g
allocating 0x1000000000 bytes

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 18:28 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) (4 responses)

Try allocating in small increments, instead of a huge allocation that blows past the VMA borders.
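A minimal sketch of that experiment (the 1 MiB chunk size is an arbitrary choice): keep asking malloc() for small blocks, without touching them, until it reports failure. Under strict commit this should stop at the commit limit; under full overcommit it may run much longer before some other kernel limit trips.

#include <stdio.h>
#include <stdlib.h>

/* Allocate in small chunks until malloc() fails, touching nothing so
 * that only commit accounting is exercised. The blocks are
 * deliberately leaked; this is a throwaway test program. */
int main(void) {
    const size_t chunk = 1 << 20; /* 1 MiB */
    size_t total = 0;
    while (malloc(chunk) != NULL)
        total += chunk;
    printf("malloc failed after %zu MiB reserved\n", total >> 20);
    return 0;
}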

Zig heading toward a self-hosting compiler

Posted Oct 9, 2020 19:14 UTC (Fri) by zlynx (guest, #2285) (3 responses)

With sbrk it won't make any difference. It's a single contiguous memory block.

I'm not even writing into it. It's the writing that triggers OOM. The Linux OOM system is happy to let you have as much virtual memory as you want as long as you don't use it.

But as you can see, when I exceed the amount of available RAM (free -g says there's 27g available) in a single allocation on the server with strict overcommit, it fails immediately.

Zig heading toward a self-hosting compiler

Posted Oct 11, 2020 17:47 UTC (Sun) by epa (subscriber, #39769) (2 responses)

"It's the writing that triggers OOM."

Isn't that exactly the point? If the memory isn't actually available, the allocation appears to succeed, but then blows up when you try to use it. There is no way to say "please allocate some memory, and I do intend to use it, so if we're out of RAM tell me now (I'll cope), and if not, please stick to your promise that the memory exists and can be used".

It's good that a single massive allocation returns failure, but that does not come close to having a reliable failure mode in all cases.

Zig heading toward a self-hosting compiler

Posted Oct 11, 2020 18:29 UTC (Sun) by zlynx (guest, #2285) (1 response)

With strict commit any allocation that succeeds is guaranteed to be available. You won't get the OOM handler killing anything when the memory is used. That's why I run my servers that way. Server applications tend to be built to handle memory allocation failures.

Unless it's Redis. You have to run Redis with full overcommit enabled.

Zig heading toward a self-hosting compiler

Posted Oct 18, 2020 15:06 UTC (Sun) by epa (subscriber, #39769)

Thanks, sorry I misunderstood your earlier comment.

