LWN: Comments on "openMosix project shutting down" https://lwn.net/Articles/241832/ This is a special feed containing comments posted to the individual LWN article titled "openMosix project shutting down". en-us Mon, 03 Nov 2025 14:34:31 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Sad for many reasons. https://lwn.net/Articles/242231/ https://lwn.net/Articles/242231/ drag One of the most serious problems with openMosix is that they never got support for the 2.6 kernel out the door. They were rocking with 2.4, but once they stopped having timely releases, people's attention drifted to other things.<br> <p> <p> I don't know what the Kerrighed folks are or are not aware of, but certainly they were well aware of openMosix/Mosix and OpenSSI. The magical thing about Kerrighed is that they have worked out a way to manage memory in a distributed manner. This allows them to support cluster-wide multi-threading and other Unix-ish stuff. Mosix only supports load-balancing forking applications.<br> Thu, 19 Jul 2007 13:54:11 +0000 openMosix project shutting down https://lwn.net/Articles/242227/ https://lwn.net/Articles/242227/ drag Because modern systems are much more limited by I/O speed than by CPU capacity. Only very specific workloads would max out a 32-way x86 system, which is what will be affordable within a couple of years. And those workloads are things that are generally best served by specific message-passing programs and specific scheduling rather than a 'Single System Image' hack (not saying that openMosix wasn't nice) for generic workloads.<br> <p> <p> The problem now is system I/O. Disk speeds, RAM speeds, etc. The CPUs are so fast that they outstrip the capabilities of the system to deliver information to them.<br> <p> <p> Of course you can parallelize the I/O for some types of jobs. And with a normal modern x86 computer that is easily possible. 
With the advent of SATA and PCI Express, gone are the days when you'd exhaust the bandwidth of your PCI backbone with a mere 5 disks or so. And gone are the cabling limitations of the PATA interface; SATA is much more flexible.<br> <p> <p> It used to be that two single-CPU boxes were cheaper than a single SMP box, two SMP boxes cost less than one 4-way box, and so on. Nowadays this is less true, and soon it will not be true at all.<br> <p> <p> <p> But on the SSI front the biggest limitation is always going to be that darn Ethernet and TCP/IP. Bandwidth is somewhat of an issue (high-speed TCP/IP is CPU intensive), but mostly it's latency. That latency kills most of the benefit of adding new machines to a Mosix-style cluster. HPC Linux clusters have cluster-specific applications and batch schedulers that take this into account, but for things like Mosix the idea was to take generic applications and make them cluster-ish. So you see, adding more machines isn't going to help the I/O limitations; it's just going to make them worse.<br> <p> <p> Until you get that high-speed, low-latency, low-overhead machine-to-machine interconnect, the idea of having a bunch of machines act as a single SMP/NUMA machine is not really going to work out. <br> <p> Not that that isn't ever going to happen. 
It's possible some day, but I don't know of any technology on the horizon that would accomplish it.<br> <p> <p> Still, you have the much more ambitious Kerrighed (which has distributed memory management for a much more SMP/NUMA-like cluster that can support threading, where Mosix is forking-only) being developed for the day when cheap, fast interconnects become available.<br> <p> At least that is the way I see things....<br> <p> Thu, 19 Jul 2007 13:41:28 +0000 openMosix project shutting down https://lwn.net/Articles/242228/ https://lwn.net/Articles/242228/ jschrod The HPC community is moving more and more to message-passing style -- read: MPI -- parallelism, and SGI is the only player left that does larger-scale SSI. (Well, Cray as well, though they also emphasize their MPI-focused systems more and more.)<br> <p> One might bemoan it, but most SSI-based applications have been replaced by MPI-based applications in most HPC environments that I'm familiar with. That is, the automotive industry, the aeronautic industry, weather simulations, and molecular simulations. The military doesn't use it any more either; they're buying Blue Genes by the lot.<br> <p> So Moshe is probably basically right with his prediction, beyond the sh**ty state of the openMosix code base.<br> Thu, 19 Jul 2007 13:16:54 +0000 openMosix project shutting down https://lwn.net/Articles/242194/ https://lwn.net/Articles/242194/ evgeny <font class="QuotedText">&gt; The new orientation is to make Kerrighed a real production quality product by focusing on stabilization and port to SMP and 64 bits architectures as a short term objective.</font><br> <p> Thanks, good to know. Let's hope that really soon I'll see my three x86_64 boxes as a single 14-core cruncher!<br> Thu, 19 Jul 2007 10:02:01 +0000 openMosix project shutting down https://lwn.net/Articles/242185/ https://lwn.net/Articles/242185/ rlottiau I don't see any logic in the Moshe Bar comment either. 
Making clusters of multi-core multiprocessors is strongly desirable for those who are looking for high-performance machines. But anyway, it is sad news to see an SSI project dying, whatever the reasons mentioned by its project leader.<br> <p> Concerning the Kerrighed project, it strongly changed its orientation last year. The project was initially a pure research project entirely supported by a French national research lab. With this research orientation, the developers were mainly interested in adding new fancy features instead of working on stabilization and ports to newer kernels or architectures.<br> <p> Last year, the main Kerrighed architects created a company to break this scheme. The new orientation is to make Kerrighed a real production quality product by focusing on stabilization and port to SMP and 64 bits architectures as a short term objective. To achieve this goal, some "fancy features" have been temporarily disabled and will be re-enabled in the near future. Of course, all the code will remain open source, and contributors are welcome!<br> <p> Thu, 19 Jul 2007 09:46:10 +0000 openMosix project shutting down https://lwn.net/Articles/241934/ https://lwn.net/Articles/241934/ tyhik "The direction of computing is clear and key developers are moving into newer virtualization approaches and other projects."<br> <p> Maybe this: openMosix is not part of the stock kernel, in contrast to the recent virtualization patches, and it is perhaps just too much to chase kernel development all the time.<br> <p> In addition, the openMosix devs have undoubtedly learned a great deal from the project in the past, but perhaps not any more, and the fun factor is vanishing now.<br> Tue, 17 Jul 2007 10:00:12 +0000 Sad for many reasons. https://lwn.net/Articles/241925/ https://lwn.net/Articles/241925/ jd One sad part of all this is that Moshe Bar is a damn nice guy. 
It's hard enough when any good project loses momentum, but there are too few top developers who are also great as people. We can't keep losing them like this. <p> The second sad part is that whilst openMosix had problems, the code was freely and easily available. The MOSIX code is harder to obtain, and the development model ties it too much to individuals. openMosix should have been largely immune from this problem, but regrettably wasn't immune enough, for whatever reason. <p> The third sad part is that although Kerrighed is improving, I saw it demoed at SC|05. It was in a shocking state at that time, and the developers seemed oblivious to other people's work in the field; poor research skills don't bode well for projects that are Very Hard computationally. <p> At the very least, the project should get listed on the Unmaintained Projects website and/or handed over to an interim rescue team to see what can be done. Tue, 17 Jul 2007 01:55:22 +0000 openMosix project shutting down https://lwn.net/Articles/241919/ https://lwn.net/Articles/241919/ briangmaddox I have to agree. I tried out openMosix but stayed with Mosix just because it worked better and worked on updated kernels. The newer kernel support was important to me because I could incorporate a lot of the low-latency patches and whatnot. openMosix just had this weird problem where a large number of processes started at once would hang the machine, whereas Mosix handled it quite gracefully. In other areas Mosix also just seemed to perform better and was more stable. Things like mon worked better under Mosix than the openMosix equivalent.<br> <p> Mon, 16 Jul 2007 23:16:30 +0000 openMosix project shutting down https://lwn.net/Articles/241914/ https://lwn.net/Articles/241914/ evgeny <font class="QuotedText">&gt; The increasing power and availability of low cost multi-core processors is rapidly making single-system image (SSI) Clustering less of a factor in computing.</font><br> <p> ??? I don't see the logic here. 
"Availability of low cost multi-core processors" is just another consequence of Moore's law. Why shouldn't I want to cluster N powerful boxes to get an even more powerful virtual machine? In fact, I DO want to; but openMosix, after many years, failed to even port (beyond some betas) to 2.6 (while the original Mosix has been running on 2.6 for quite some time already). Not to mention the permanently "almost ready" amd64 port. (To be honest, the latter seems to plague every other SSI project as well; e.g., any time I look at Kerrighed it's said to have the 64-bit port ready in a couple of months ;-))<br> Mon, 16 Jul 2007 22:01:11 +0000 openMosix project shutting down https://lwn.net/Articles/241918/ https://lwn.net/Articles/241918/ xorbe Shouldn't they now cluster multicore systems? Why stop?<br> Mon, 16 Jul 2007 21:59:46 +0000