LWN: Comments on "Multiple kernels on a single system"
https://lwn.net/Articles/1038847/
This is a special feed containing comments posted to the individual LWN article titled "Multiple kernels on a single system".
Last updated: Thu, 16 Oct 2025 11:44:28 +0000

Not sure about this ...
https://lwn.net/Articles/1039694/
Posted Fri, 26 Sep 2025 12:39:20 +0000 by jsakkine

Looking at how the branching, the inline comments, and even the cover letter have been laid out, my initial guess would be that this is generated. It just does not "feel" like how anyone would write up their changes.

I'd like to point out that, personally, my viewpoint does not come from any opinionated standpoint. Using any form of code generation to get some stuff going is absolutely fine, as far as I'm concerned. The thing is, however, that it is exactly *placeholder code*, and to my eyes the patch set looks like placeholder/stub code as a feature.

Obviously I don't know the facts, this is just a guess, and I absolutely don't enjoy making claims like this, and I hope that I have completely misunderstood the topic.

Lots of use cases - Rolling kernel upgrade
https://lwn.net/Articles/1039469/
Posted Thu, 25 Sep 2025 12:55:53 +0000 by Karellen

I'm not sure having multiple kernels accessing the same block device/filesystem is going to work very well. If it were going to work at all, you'd probably need a separate virtual filesystem driver for the new kernel, which talks to a server on the old one, and then, once the old kernel is ready to unmount the filesystem, do a switcheroo-handover type thing. The new kernel would have to swap out the virtual filesystem driver for the real ext4/btrfs/... and start accessing the "real" inodes directly?
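Purely to make that handover idea concrete, here is a hypothetical sketch of the kind of message such a forwarding ("proxy") filesystem on the new kernel might send to a server on the old kernel, which still owns the real mount. None of these names come from the patch set, and the hard parts (caching, locking, the actual switch-over) are elided.

    /*
     * Hypothetical wire format for a cross-kernel filesystem proxy, for
     * illustration only: the new kernel forwards VFS-level operations to
     * the old kernel until FSPROXY_HANDOVER, at which point the proxy
     * would be swapped for the real ext4/btrfs driver.
     */
    #include <stdint.h>

    enum fsproxy_op {
        FSPROXY_LOOKUP,      /* resolve a name to an opaque handle */
        FSPROXY_GETATTR,
        FSPROXY_READ,
        FSPROXY_WRITE,
        FSPROXY_RELEASE,
        FSPROXY_HANDOVER,    /* old kernel: sync, freeze, hand over the device */
    };

    struct fsproxy_msg {
        uint32_t op;         /* enum fsproxy_op */
        uint32_t flags;
        uint64_t handle;     /* inode/file handle issued by the old kernel */
        uint64_t offset;
        uint64_t len;
        /* names and data follow in the shared-memory channel */
    };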
Some precedent for this in VMware's ESX kernel (version 5.0 and earlier)
https://lwn.net/Articles/1039161/
Posted Tue, 23 Sep 2025 19:31:02 +0000 by acarno

Virginia Tech's SSRG has an ongoing project similar to Barrelfish called Popcorn Linux (a fun joke about multiple "kernels" ;)

In addition to running natively (e.g., multiple kernels on a single multi-core system), they also investigated running across different architectures and performing stack transformation to migrate memory between nodes.

> The project is exploring a replicated-kernel OS model for the Linux operating system. In this model, multiple Linux kernel instances running on multiple nodes collaborate each other to provide applications with a single-image operating system over the nodes. The kernels transparently provide a consistent memory view across the machine boundary, so threads in a process can be spread across the nodes without an explicit declaration of memory regions to share nor accessing through a custom memory APIs. The nodes are connected through a modern low-latency interconnect, and each of them might be based on different ISA and/or hardware configuration. In this way, Popcorn Linux utilizes the ISA-affinity in applications and scale out the system performance beyond a single system performance while retaining full POSIX compatibility.

Project Website: https://popcornlinux.org/
2020 LWN Article: https://lwn.net/Articles/819237/

Firmware "kernels"
https://lwn.net/Articles/1039125/
Posted Tue, 23 Sep 2025 15:04:40 +0000 by linusw

Yes, they call it AMP ("Asymmetric Multi-Processing"), and there are attempts such as OpenAMP to standardize around using rpmsg for communication across these.

Lots of use cases
https://lwn.net/Articles/1039081/
Posted Mon, 22 Sep 2025 22:13:11 +0000 by Wol

And then the non-multikernel-aware kernel trips over a bug, tries to do something which would normally crash, and there just happens to be something real there that it accidentally trashes ...

Cheers,
Wol

interesting similarities to "hardware partitioning" of IBM mainframes
https://lwn.net/Articles/1039080/
Posted Mon, 22 Sep 2025 22:10:31 +0000 by Wol

> There is a gazillion different potential reasons for that: the solution was in search of a problem, it was too expensive, it was not mature yet, it broke backwards compatibility too much, it was mature and successful for a while but displaced by less convenient but much cheaper commodity solutions, etc.

It wasn't interesting to universities? (So students never knew about it.)

Cheers,
Wol

Firmware "kernels"
https://lwn.net/Articles/1039067/
Posted Mon, 22 Sep 2025 17:55:58 +0000 by marcH

> The other significant piece is a new inter-kernel communication mechanism, based on inter-processor interrupts, that allows the kernels running on different CPUs to talk to each other. Shared memory areas are set aside for the efficient movement of data between the kernels.

Maybe this could become a standard for communicating with firmware too, so drivers don't have to keep re-inventing this wheel?

It's quite different because it's heterogeneous (both at the HW and SW levels), but most systems are _already_ "multi-kernels" when you think about it!
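To make the mechanism in that quote more concrete, here is a minimal, self-contained sketch of the usual pattern: a single-producer/single-consumer ring in a shared memory region, with the interrupt reduced to a hypothetical kick_peer() stub standing in for the IPI or firmware doorbell, used only to say "go look at the ring". This is not the patch set's actual API.

    /* Minimal shared-memory mailbox sketch (not the patch set's API): data
     * moves through shared memory, and an IPI/doorbell only signals "there
     * is something new in the ring". C11 atomics manage the indices. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_SLOTS 64u
    #define SLOT_SIZE  256u

    struct mk_ring {
        _Atomic uint32_t head;               /* written by producer */
        _Atomic uint32_t tail;               /* written by consumer */
        uint8_t slot[RING_SLOTS][SLOT_SIZE]; /* payload area */
    };

    /* Hypothetical doorbell: on real hardware this would be an IPI or an
     * MMIO write that interrupts the peer kernel/firmware. */
    static void kick_peer(void) { }

    static int mk_send(struct mk_ring *r, const void *msg, uint32_t len)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (len > SLOT_SIZE || head - tail == RING_SLOTS)
            return -1;                       /* too big, or ring full */

        memcpy(r->slot[head % RING_SLOTS], msg, len);
        /* Publish the slot before telling the peer about it. */
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        kick_peer();
        return 0;
    }

    static int mk_recv(struct mk_ring *r, void *msg, uint32_t len)
    {
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

        if (head == tail)
            return -1;                       /* empty; wait for the next kick */

        memcpy(msg, r->slot[tail % RING_SLOTS], len < SLOT_SIZE ? len : SLOT_SIZE);
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return 0;
    }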
Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038994/
Posted Mon, 22 Sep 2025 10:05:30 +0000 by paulj

> Like I said, I'd consider this cool thing a *kind* of virtualization, one that trades flexibility for performance, not something *distinct* from virtualization.

Similar stuff has been done before under the names "Logical Partitions" (LPARs) at IBM and "Logical Domains" (LDOMs) at Sun Microsystems (the sun4v stuff introduced with the UltraSPARC T1 Niagara).

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038988/
Posted Mon, 22 Sep 2025 09:46:46 +0000 by farnz

It's certainly an interesting turn of the wheel; one of the selling points of NUMA over clusters back in the 1990s was that a cluster required you to work out what needed to be communicated between partitions of your problem, and pass messages, while a NUMA machine let any CPU read any data anywhere in the system.

NUMA systems could thus be treated as just a special case of clusters (instead of running an instance per system, passing messages over the network, run an instance per NUMA node, bound to the node, passing messages over shared memory channels), but they benefited hugely for problems where you'd normally stick to your instance's data, but could need to get at data from anywhere to solve the problem, since that was now just "normal" reads instead of message passing.

I'd be interested to see what the final intent behind this work is - is it better RAS (since you can upgrade the kernel NUMA node by NUMA node), is it about sharing a big machine among smaller users (like containers or virtualization, but with different costs), or is it about giving people an incentive to write their programs in terms of "one instance per NUMA node" again?
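For the "one instance per NUMA node" model, at least the binding part is easy to sketch with libnuma (a real library; numa_run_on_node() and numa_alloc_onnode() exist, while worker_main() and the scratch size here are invented for the example). A launcher would fork one such worker per node reported by numa_max_node().

    /* Sketch: pin one worker instance to a single NUMA node and keep its
     * heap local to that node, as the "instance per node" model expects.
     * Build with -lnuma. worker_main() is a placeholder for real work. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void worker_main(void *scratch, size_t len)
    {
        /* ... real work here; messages to other nodes go over explicit channels ... */
        (void)scratch;
        (void)len;
    }

    int main(int argc, char **argv)
    {
        size_t scratch_len = 64 << 20;   /* 64 MiB of node-local scratch */
        void *scratch;
        int node;

        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return EXIT_FAILURE;
        }

        node = (argc > 1) ? atoi(argv[1]) : 0;

        /* Restrict this process's CPUs and default allocations to one node. */
        if (numa_run_on_node(node) != 0) {
            perror("numa_run_on_node");
            return EXIT_FAILURE;
        }
        numa_set_preferred(node);

        /* Explicitly node-local memory for the hot working set. */
        scratch = numa_alloc_onnode(scratch_len, node);
        if (!scratch)
            return EXIT_FAILURE;

        worker_main(scratch, scratch_len);
        numa_free(scratch, scratch_len);
        return EXIT_SUCCESS;
    }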
Lots of use cases - Rolling kernel upgrade
https://lwn.net/Articles/1038990/
Posted Mon, 22 Sep 2025 08:10:14 +0000 by rhbvkleef

I think another enticing use case would be a kind of "rolling" kernel upgrade, where we can start a newer kernel on a subset of cores and migrate our userspace over to it gradually, before killing the old kernel.

Shared memory
https://lwn.net/Articles/1038989/
Posted Mon, 22 Sep 2025 08:09:48 +0000 by matthias

With this system you could upgrade the host kernel of a VM system by bringing up the new kernel and then migrating the VMs to the new kernel with zero-copy operations. You only need to change the memory mappings of the old and new kernel. This could be way more efficient than migrating all VMs to a backup machine and migrating them back afterwards.

Lots of use cases
https://lwn.net/Articles/1038984/
Posted Mon, 22 Sep 2025 04:50:14 +0000 by skissane

> However, in this case the underlying system is the hardware, that doesn't know anything about these partitions. A non-multikernel-aware kernel would discover all the memory and all the devices, and think that it owns everything.

Maybe someone just needs to add a "telling lies facility" to the hardware/firmware which the multikernel could use to get the hardware/firmware to lie to the non-multikernel-aware kernel? This could be much more lightweight than standard virtualisation, since it wouldn't be involved at runtime, only in config discovery.

Shared memory
https://lwn.net/Articles/1038982/
Posted Mon, 22 Sep 2025 03:28:58 +0000 by quotemstr

> I can see how this could be interesting for some kind of fault tolerance and perhaps especially zero downtime kernel updates. A message passing model is neat and clean.

You can do this with VMs today. Why would you use this new thing instead of the nice, mature virtualization stack?

Shared memory
https://lwn.net/Articles/1038978/
Posted Mon, 22 Sep 2025 01:10:48 +0000 by SLi

I can see how this could be interesting for some kind of fault tolerance and perhaps especially zero downtime kernel updates. A message passing model is neat and clean.

Having said that, I started to wonder. Would it still be possible, and would it make enough sense, to have some kind of a shared memory mechanism between userspace processes running on the different kernels? I don't think it can look like POSIX, but something stripped down.

What I'm basically thinking of: multikernel gives us some benefits while arguably sacrificing other things as less important. Could we meaningfully claw back some of those lost things where it makes sense?
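A stripped-down version of what SLi describes can almost be faked with existing interfaces, if the kernels agree out of band on a reserved physical range that neither treats as ordinary RAM. The sketch below is illustrative only: the address is made up, it assumes /dev/mem access to that range is permitted (STRICT_DEVMEM and friends often forbid it), and it ignores the cache-coherence and lifetime questions a real design would have to answer.

    /* Illustrative only: map an agreed-upon, reserved physical window into
     * this process so a process under the other kernel (mapping the same
     * window) sees the same bytes. Not POSIX shm: no naming, no permissions,
     * no teardown protocol. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHARED_PHYS 0x200000000ULL   /* made-up reserved physical address */
    #define SHARED_LEN  (2UL << 20)      /* 2 MiB window */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        volatile uint8_t *win = mmap(NULL, SHARED_LEN, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, SHARED_PHYS);
        close(fd);
        if (win == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        win[0] = 0x42;   /* visible to the peer once it maps the same window */
        printf("peer byte: 0x%02x\n", win[1]);

        munmap((void *)win, SHARED_LEN);
        return 0;
    }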
interesting similarities to "hardware partitioning" of IBM mainframes
https://lwn.net/Articles/1038977/
Posted Sun, 21 Sep 2025 23:13:05 +0000 by marcH

> Anyway, this new multi-kernel work could be used in many different and useful ways, as others have already noted, but it's always interesting to see how essentially every "new" idea has antecedents in the past.

There are a gazillion different potential reasons for that: the solution was in search of a problem, it was too expensive, it was not mature yet, it broke backwards compatibility too much, it was mature and successful for a while but displaced by less convenient but much cheaper commodity solutions, etc.

1% inspiration, 99% perspiration. The lone inventor and their eureka! moment is probably the least common case, but it makes the best stories to read or watch, and those stories massively skew our perception. Our tribal brain is hardwired for silver bullets and miracles and "allergic" to slow, global, real-world evolutions. Not just for science and technology; it's the same for economics, war, sociology, climate, etc.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038969/
Posted Sun, 21 Sep 2025 20:39:05 +0000 by willy

... no.

The patches are to do this automatically without library involvement. I think the latest round were called something awful like "Copy On NUMA".

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038968/
Posted Sun, 21 Sep 2025 20:35:46 +0000 by quotemstr

Because the libraries have to have something to talk to? It's like asking why we add KVM syscalls when we have the kvm command line. Separate jobs.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038964/
Posted Sun, 21 Sep 2025 20:19:36 +0000 by willy

If those libraries already exist, why do people keep submitting patches to add this functionality to the kernel?

Limited isolation
https://lwn.net/Articles/1038962/
Posted Sun, 21 Sep 2025 20:10:55 +0000 by Lionel_Debroux

Suitably configured grsec kernels typically use per-CPU PGDs for security reasons (IIRC, to avoid some race conditions), so I wonder how that would mix with a mainline kernel which doesn't.

Lots of use cases
https://lwn.net/Articles/1038959/
Posted Sun, 21 Sep 2025 17:46:56 +0000 by glettieri

> It is already the case that a booting kernel asks the underlying system which part of physical memory it is allowed to use

However, in this case the underlying system is the hardware, which doesn't know anything about these partitions. A non-multikernel-aware kernel would discover all the memory and all the devices, and think that it owns everything.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038948/
Posted Sun, 21 Sep 2025 12:42:18 +0000 by ballombe

This is correct. However, NUMA systems come with libraries that give you access to the physical layout, so you can copy the working set only once per coherent NUMA block, and those blocks are much larger than 16 cores nowadays.

Lots of use cases
https://lwn.net/Articles/1038949/
Posted Sun, 21 Sep 2025 12:16:53 +0000 by kleptog

> That second "foreign" kernel would need to understand the "partition" it is allowed to use so it won't try to take over rest of the machine where another kernel may be running

It is already the case that a booting kernel asks the underlying system which part of physical memory it is allowed to use. It can then prepare the kernel mapping so it can only access the parts it is allowed to. It can't assume anything about all the other parts.

Now, this only prevents accidental interference. There's nothing that prevents the kernel from modifying its mapping (dynamically adding RAM/devices is a thing), but it would give a very high degree of isolation. Not as good as a hypervisor, but pretty good.
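Concretely, Linux already has boot-time knobs for overriding the firmware-provided memory map, which is roughly the shape of what handing a restricted view to a secondary kernel would need. The line below is only an illustration using existing x86 parameters; the addresses, sizes and CPU count are invented, and this does nothing to partition devices or interrupts.

    # Hypothetical command line for a secondary kernel confined to its own slice:
    # discard the firmware map, then describe only the RAM this instance may touch.
    memmap=exactmap memmap=640K@0 memmap=8G@0x100000000 nr_cpus=8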
Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038947/
Posted Sun, 21 Sep 2025 11:40:25 +0000 by quotemstr

> Virtualization is an abstraction of the hardware.

So VMMs doing PCIe pass-through aren't doing virtualization?

Anyway, the terminology difference is immaterial. In the purest view of virtualization, a guest shouldn't be aware that it's virtualized or that other guests exist. In the purest view of a partition, the whole system is built around multi-instance data structures. In reality, the virtualization is leaky, and deliberately so, because the leaks are useful. Likewise, in a partition setup, especially one grafted onto an existing system, at some point you arrange data structures such that code running on one partition "thinks" it owns a system: there's your abstraction.

Besides: lots of people arrange VMs and assign resources such that the net effect ends up being a partition anyway. The multikernel work might be a way to achieve practically the same configuration with more performance and less isolation.

My point is that it would be nice to manage configurations like this using the existing suite of virtualization tools. Even if multikernel is not virtualization under some purist definition of the word, it's close enough, practically speaking, that virtualization tools can be made to work well enough that the configuration stacks can be unified and people don't have to learn a new thing.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038946/
Posted Sun, 21 Sep 2025 10:56:55 +0000 by kazer

> Like I said, I'd consider this cool thing a *kind* of virtualization

Virtualization is an abstraction of the hardware.

A better term for a multi-kernel system would be *partition* (the term has already been used in the mainframe world). In a multi-kernel design, a kernel would still see the whole hardware as it is (not an abstraction), but it would be limited to a subset of the capabilities (a partition).

Linux already has various capabilities to limit certain tasks to run on certain CPUs, so this would be taking that approach further, not adding abstractions.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038945/
Posted Sun, 21 Sep 2025 10:15:01 +0000 by willy

Well, there are two schools of thought on that. Some say that NUMA hops are so slow and potentially congested (and therefore have high variability in their latency) that it's worth replicating read-only parts of the working set across nodes. They even have numbers that prove their point. I haven't dug into it enough to know if I believe that these numbers are typical or if they've chosen a particularly skewed example.

Limited isolation
https://lwn.net/Articles/1038944/
Posted Sun, 21 Sep 2025 09:23:46 +0000 by cyperpunks

Would it be possible to mix a vanilla kernel and a grsecurity kernel on the same system? Such a thing would indeed be very useful, IMHO.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038943/
Posted Sun, 21 Sep 2025 09:17:00 +0000 by ballombe

... or you can run an SSI OS that moves the complexity to the OS, where it belongs.
<https://en.wikipedia.org/wiki/Single_system_image>
... or HPE will sell you NUMAlink systems with coherent memory across 32 sockets.

But more seriously, when using message passing, you still want to share your working set across all cores in the same node to conserve memory. Replacing a 128-core system with 8 16-core systems will require 8 copies of the working set.
Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038942/
Posted Sun, 21 Sep 2025 08:07:48 +0000 by quotemstr

> There are plenty of programs that work perfectly well with (e.g.) 200 threads on 200 cores, on hardware that exists today. Asking people to rewrite them to introduce a message-passing layer to get them to scale on your hypothetical cluster is a non-starter. Definitely a bug, not a feature.

Yes, and those programs can keep running. Suppose I'm developing a brand-new system and a cluster on which to run it. My workload is bigger than any single machine no matter how beefy, so I'm going to have to distribute it *anyway*, with all the concomitant complexity. If I can carve up my cluster such that each NUMA domain is a "machine", I can reuse my inter-box work-distribution stuff for intra-box distribution too.

Not every workload is like this, but some are, and life can be simpler this way.

interesting similarities to "hardware partitioning" of IBM mainframes
https://lwn.net/Articles/1038940/
Posted Sun, 21 Sep 2025 07:19:05 +0000 by dale.hagglund

IBM mainframes (I won't say for sure about modern ones, but certainly the 370, 390, and compatible Amdahl systems I was aware of in the mid 80s at university) supported a feature where the hardware could be divided into "partitions", each of which could run a fully separate "real mode" OS instance. Again, I don't know this for sure, but I wouldn't be entirely surprised if there was some hardware help for controlling which CPUs, memory, devices, etc, could be discovered by the OS running in a particular partition. As I understand it, partitioning was commonly used for testing new releases of the OS and related software, to separate production from development and test, and no doubt for other reasons.

Anyway, this new multi-kernel work could be used in many different and useful ways, as others have already noted, but it's always interesting to see how essentially every "new" idea has antecedents in the past.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038939/
Posted Sun, 21 Sep 2025 06:04:33 +0000 by willy

RCU was invented at Sequent (who were bought by IBM) and used in their Dynix/ptx kernel.

Lots of use cases
https://lwn.net/Articles/1038937/
Posted Sun, 21 Sep 2025 04:37:21 +0000 by kazer

> two different kernels

That second "foreign" kernel would need to understand the "partition" it is allowed to use so it won't try to take over the rest of the machine where another kernel may be running. Unless there is a way to make the hardware understand where that other kernel is allowed to run (basically selectively removing supervisor rights from the foreign kernel).

So I can only see that happening if the second kernel understands multikernel situations correctly as well. Otherwise it is back to hypervisor virtualization.

> old kernel

Sorry, but for the reasons mentioned above (supervisor access to hardware) that old kernel would need to be multikernel-compliant as well. Otherwise you need a plain old hypervisor for virtualization.

Some precedent for this in VMware's ESX kernel (version 5.0 and earlier)
https://lwn.net/Articles/1038932/
Posted Sat, 20 Sep 2025 21:57:43 +0000 by chexo4

IIRC this is how multi-core systems under the seL4 microkernel work. At least in some configurations. Something about it being simpler to implement, probably.
Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038926/
Posted Sat, 20 Sep 2025 21:31:32 +0000 by stephen.pollei

I do seem to recall that it was for "locking complexity" reasons. If I recall correctly, around this time there was the BKL and relatively few other locks. With even just a BKL, it could scale to 2 to 4 cores/CPUs with a lot of typical workloads. There was too much contention for the kernel to scale up effectively to even the 12-to-16-core range and beyond. Several people were of the opinion that Sun Solaris and others had their locks too fine-grained. For this reason, I think they tried to be very cautious in breaking up coarse-grained locks into finer-grained locks; they tried requiring measurements on realistic loads showing that a lock was having contention or latency issues before they accepted patches to break it up. They tried to avoid too much locking complexity and overhead.

I don't know enough to have an opinion on how the Linux kernel was able to scale as successfully as it has. There were certainly doubts in the past. If I recall correctly, RCU was being used in other kernels before it was introduced in Linux, but I don't recall which ones.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038924/
Posted Sat, 20 Sep 2025 20:22:35 +0000 by roc

There are plenty of programs that work perfectly well with (e.g.) 200 threads on 200 cores, on hardware that exists today. Asking people to rewrite them to introduce a message-passing layer to get them to scale on your hypothetical cluster is a non-starter. Definitely a bug, not a feature.

If the Linux kernel had been unable to scale well beyond 16 cores, then this cluster idea might have been a viable path forward. But Linux did, and any potential competitor that doesn't is simply not viable for these workloads.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038921/
Posted Sat, 20 Sep 2025 18:59:32 +0000 by willy

You're right; Larry wanted a cluster of SMPs. Now, part of that was trying to avoid the locking complexity cliff; he didn't want Solaris to turn into IRIX with "too many" locks (I'm paraphrasing his point of view; IRIX fanboys need not be upset with me).

But Solaris didn't have RCU. I would argue that RCU has enabled Linux to scale further than Solaris without falling off "the locking cliff". We also have lockdep to prevent us from creating deadlocks (I believe Solaris eventually had an equivalent, but that was after Larry left Sun). Linux also distinguishes between spinlocks and mutexes, while I believe Solaris only has spinaphores. Whether that's terribly helpful or not for scaling, I'm not sure.
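As a concrete illustration of the read-side cheapness being credited to RCU here, a minimal kernel-style sketch (the global_data structure and both functions are invented for the example): readers take no lock at all and never bounce a cache line between CPUs, while updaters copy, publish, and defer the free until pre-existing readers are done.

    /* Minimal RCU usage sketch: lock-free readers, serialized updaters. */
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct global_data {
        int threshold;
    };

    static struct global_data __rcu *gd;
    static DEFINE_SPINLOCK(gd_lock);

    /* Read side: no lock, no atomic read-modify-write, scales with core count. */
    int gd_read_threshold(void)
    {
        struct global_data *p;
        int val;

        rcu_read_lock();
        p = rcu_dereference(gd);
        val = p ? p->threshold : 0;
        rcu_read_unlock();
        return val;
    }

    /* Write side: copy, update, publish, then wait for old readers to finish. */
    int gd_set_threshold(int threshold)
    {
        struct global_data *new, *old;

        new = kmalloc(sizeof(*new), GFP_KERNEL);
        if (!new)
            return -ENOMEM;
        new->threshold = threshold;

        spin_lock(&gd_lock);
        old = rcu_dereference_protected(gd, lockdep_is_held(&gd_lock));
        rcu_assign_pointer(gd, new);
        spin_unlock(&gd_lock);

        synchronize_rcu();    /* wait for pre-existing readers */
        kfree(old);
        return 0;
    }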
Shared access?
https://lwn.net/Articles/1038920/
Posted Sat, 20 Sep 2025 18:38:59 +0000 by Lennie

The old OpenSSI code is still available too: https://en.wikipedia.org/wiki/OpenSSI

Lots of use cases
https://lwn.net/Articles/1038917/
Posted Sat, 20 Sep 2025 18:25:47 +0000 by geofft

For whatever reason Apple only enables it with the M3 chip and later, as documented for the high-level Virtualization.framework's VZGenericPlatformConfiguration.isNestedVirtualizationSupported:
https://developer.apple.com/documentation/virtualization/vzgenericplatformconfiguration/isnestedvirtualizationsupported

I also get false from the lower-level Hypervisor.framework's hv_vm_config_get_el2_supported() on my machine:
https://developer.apple.com/documentation/hypervisor/hv_vm_config_get_el2_supported(_:)?language=objc

memory and devices
https://lwn.net/Articles/1038916/
Posted Sat, 20 Sep 2025 17:34:37 +0000 by willy

You partition them. Assign various devices and memory to each kernel.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038915/
Posted Sat, 20 Sep 2025 17:10:15 +0000 by quotemstr

No it doesn't. You can have more threads than cores. If you mean that you can't get more than 16-way parallelism this way using threads: feature, not a bug. Use a cross-machine distribution mechanism (e.g. dask) and handle work across an arbitrarily large number of cores on an arbitrarily large number of machines.

Neat: but isn't this a type-1 hypervisor?
https://lwn.net/Articles/1038902/
Posted Sat, 20 Sep 2025 15:29:08 +0000 by ballombe

This seems to preclude workloads that spawn more than 16 threads.