Shrinking the kernel with a hammer
Posted Mar 7, 2018 2:13 UTC (Wed) by abufrejoval (guest, #100159)Parent article: Shrinking the kernel with a hammer
I remember running Microport Unix on my 80286 (fully loaded with 640K of base RAM) with an Intel Above Board that added 1.5MB of RAM, I believe (could have been 2MB). It also gave me a free 8087 math co-processor as a gift and a set of disks labelled "Microsoft Windows 1.01".
Since I dual-booted it with DOS, and the split between expanded memory (paged into a 64K window in the "BIOS area" in real mode) and extended memory (above the 1MB mark, available only in protected mode) was set via DIP switches, I allocated around 50% to each. That meant a little more than 1MB of RAM overall for Microport.
Full UNIX (TM) file system, full multi-user (via serial ports): very much like a PDP-11 in fact, where Unix was born. The 286 had an MMU, but at the segment level rather than the page level, again pretty much like a PDP-11. Of course UNIX System V Release 2 didn't have 400 system calls, and the kernel was statically built and linked. I did some fiddling with the serial drivers to have them use the internal queues of the UARTs, which avoided an interrupt after every character sent or received. That's what made 115 kbit/s possible. I also fiddled with Adaptec bus-master SCSI drivers.
Ah, and it ran a DOS box long before OS/2 ever did: a single DOS task which ran in the lower 640K by resetting the 286 CPU via the keyboard controller on timer interrupts. The BIOS would then know, via a CMOS byte, that the computer had not just been turned on but had come back from protected mode: a glorious hack made possible by IBM in the PC-AT, so it could test the RAM above 1MB (which requires protected mode) after boot on systems which had more than 640K of RAM installed.
For kicks I ran a CP/M emulator in the DOS box, while running a compile job on the Unix...
</old-memories>
<other-old-memories>
A couple of years later I had to port X11R4 to a Motorola 68020 system running AX, a µ-kernel OS somewhere between QNX and Mach. It was basically a fixed demo system that ran a couple of X applications on a true-color HDTV display using a TI TMS34020 TIGA board. I had to make X11R4 true-color capable, too: it only supported 1-bit and 8-bit color depths at that point.
The MMU wasn't enabled on the 68020, there was no file system, and the µ-kernel just gave me task scheduling. So I had to write a Unix emulator: basically a library that emulated all the system calls required by X11. The "file system" was a memory-mapped archive (uncompressed) that just got included into the BLOB along with everything else.
I used a GCC 1.31 cross-compiler on a Sun SPARCstation. After several months of working through the X11 source code to make it true-color capable and to run some accelerated routines on the TIGA GPU (the TMS34020 was a full CPU that used bit rather than byte addressing, for up to 16MB of RAM with 32-bit addresses!), it just worked perfectly at first launch! Without any debugging facilities I'd have been screwed if it hadn't...
Dunno what the 68020 had for RAM, but I doubt it was more than 1 or 2MB.
</other-old-memories>
So where it all comes together: the process you describe is somewhat similar to turning Linux plus a payload into a unikernel or library OS, where everything not needed by the payload application is removed from the image.
I sure wouldn't mind if Linux supported that out of the box, including randomization during the LTO phase for ROP protection. And yes, I believe the GPL is not a good match for that.
The Linux build process must be one of the most wasteful things you can do on a computer, starting with the endless re-parsing of header files that helped motivate Google to create Go.
I keep dreaming of an AI that could take the Linux source code and turn it into a unikernel/library-OS image, one that is only incrementally recompiled and merged where needed when you change some kernel or application code at run time.
I believe I'd call it Multics.
