
Nice to see an update

Posted Apr 3, 2026 19:20 UTC (Fri) by joed (subscriber, #139219)
Parent article: No kidding: Gentoo GNU/Hurd

I saw the sys-kernel/hurd kernel show up in my portage tree a week or two ago and was wondering when we'd see some news about it.

I'm looking forward to playing with it over my Easter vacation. Feels like it's seeing a bit of a resurgence now as the amd64 support begins to stabilize.



Nice to see an update

Posted Apr 4, 2026 15:49 UTC (Sat) by Vorpal (guest, #136011) [Link] (25 responses)

x86-64 was first introduced in 2003 (announced in 1999 though apparently), so Hurd is over 2 decades late on this. I don't know the story behind that, but why would anyone spend effort on this rather than one of the many other faster-moving OSes at this point? Not just Linux, but there are all the BSDs, plus Redox, Haiku, etc.

Nice to see an update

Posted Apr 4, 2026 17:36 UTC (Sat) by jmalcolm (subscriber, #8876) [Link] (21 responses)

> I don't know the story behind that

Why is GNU HURD late on 64-bit support? Simply because it was started much earlier as a 32-bit project and it has had limited developer attention.

The GNU Project was started in 1983 with the goal of creating a fully Free Software operating system. As part of that, work began on GNU HURD in 1990 to serve as the kernel for the GNU OS. It has been in development since then. However, when the Linux kernel appeared a year later, people combined the Linux kernel with the rest of the GNU Project to form the earliest Linux distributions. The energy and excitement moved to Linux and work on GNU HURD stalled. But enthusiasts never completely forgot about it and it has moved along very slowly. It is now getting close to a point where you could consider it useful, so we are seeing it get more attention and are starting to see distributions that replace the Linux kernel with GNU HURD.

> why would anyone spend effort

There are probably as many reasons as people involved. Finishing the job, for one, and creating a true "GNU OS" instead of having to pass off Linux as "GNU/Linux". If you are a true GPL adherent, Linux is not actually a pure GPL operating system, as it allows non-free components. The FSF blesses only a small number of Linux distros as "Free Software". GNU HURD is unmodified GPLv2. So, fewer imperial entanglements, in theory.

GNU HURD has a microkernel design and so, despite being older, is arguably a more "modern" design than Linux. Linux has modules and FUSE but is otherwise monolithic. RedoxOS is also a microkernel design, but it brings permissive licensing and the Rust language. I see those as advantages, but many do not and so will find GNU HURD attractive. If you have some vision, you could believe that GNU HURD will ultimately be a better operating system than Linux.

And, as I said, I am sure there are a thousand other reasons.

As a technology and as a fit for the originally stated goals of the Free Software Foundation, I consider RedoxOS to be the modern replacement for the GNU Project. However, RedoxOS is permissively licensed and so politically incompatible with the world view of many fans of the GPL. RedoxOS provides all "4 freedoms" that the FSF talks about and satisfies every word of "What is Free Software" on the FSF website. But, for many, only copyleft licenses are acceptable and so GNU HURD will be considered superior to RedoxOS.

The above paragraph applies to the BSD operating systems as well. BSD was Open Source long before GNU HURD, and the BSD kernel was actually considered for the GNU OS before work began on GNU HURD. When the AT&T lawsuit was settled and FreeBSD was released in 1993, it was far more advanced and complete than Linux was. But momentum is a powerful force and BSD never managed to pull the spotlight off Linux. RedoxOS and even GNU HURD will face the same challenge. GNU HURD may actually have the better shot, independent of quality or technology, as the GNU brand is already strong in the Linux world and GNU has many fans in the Linux userbase. The fact that Arch, Debian, Guix, and now Gentoo offer GNU HURD in their projects illustrates this.

Nice to see an update

Posted Apr 5, 2026 15:04 UTC (Sun) by rsidd (subscriber, #2582) [Link] (2 responses)

Thanks for the long explanation. In my opinion, this

> creating a true "GNU OS" instead of having to pass off Linux as "GNU/Linux"

could be read in many ways, but I choose to read it in terms of the reality of the last 30+ years: "Linux" is not a kernel but a collection of operating systems with the Linux kernel at their core. Since at least the mid 1990s, GNU utilities have been developed with the idea of being used in Linux, and (particularly with glibc) the development has been concurrent with Linux kernel development. Entire subsystems have been developed specifically for Linux, from systemd to audio (jack, pulse, pipewire...) to even display protocols (wayland -- which incidentally does work with other systems, but Linux is primary). So, in 2026, not only is it wrong to «pass off Linux as "GNU/Linux"», but it would be appropriate to call a Hurd-based Gentoo or Debian system "Linux/Hurd", because the userland is Linux.

Why bother with Hurd? "Because it is conceptually interesting" is a strong argument. "Because people can spend their time how they like" is a valid argument. "Because it is copyleft" is not a useful argument at all. If it had been strictly GPL v2, it would have been helpful because it could have borrowed code from the Linux kernel (imagine a Hurd with a driver compatibility layer for Linux drivers). But it can't. As such, copyleft has no practical advantage over RedoxOS or BSD, which ensure their core is free software but use persuasion, not legal compulsion, to encourage others to contribute likewise.

Nice to see an update

Posted Apr 5, 2026 17:43 UTC (Sun) by sam_c (subscriber, #139836) [Link] (1 responses)

> If it had been strictly GPL v2, it would have been helpful because it could have borrowed code from the Linux kernel (imagine a Hurd with a driver compatibility layer for Linux drivers).

The Hurd did do this, and still does to an extent, so it is not currently purely GPL v*: it is, however, in the process of migrating to rumpkernel, which allows running NetBSD drivers in userland.

Importing drivers from Linux (both in gnumach and also in userland via netdde) has proved too complex as Linux moves quickly.

Nice to see an update

Posted Apr 5, 2026 18:12 UTC (Sun) by Wol (subscriber, #4433) [Link]

> Importing drivers from Linux (both in gnumach and also in userland via netdde) has proved too complex as Linux moves quickly.

Given that I thought the whole point of a microkernel was to move everything possible (i.e. drivers etc.) into userspace, this is probably the main reason why HURD doesn't use Linux drivers (assuming that is true in the first place).

If a driver is compiled as a stand-alone executable, then its licence (which is NOT necessarily GPLv2) will not impact on the HURD licence (or vice versa).

(I gather there's a fair bit of MIT/BSD/v2+ code in Linux, depending on the authors' whims. It's just that it's all compatible with pure v2.)

Cheers,
Wol

Nice to see an update

Posted Apr 5, 2026 17:44 UTC (Sun) by sam_c (subscriber, #139836) [Link]

>Why is GNU HURD late on 64-bit support? Simply because it was started much earlier as a 32-bit project and it has had limited developer attention.

Right, a small group of developers, and initial 64-bit porting to any target is quite a pain. Samuel talks about it in more detail in his recent FOSDEM talk too: https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_...

Nice to see an update

Posted Apr 5, 2026 17:58 UTC (Sun) by willy (subscriber, #9762) [Link] (16 responses)

I don't find the HURD architecture terribly compelling. I did when I was younger, but now I'm old and cynical, I don't see the advantage to the user of splitting the kernel services into a bunch of loosely coupled components.

It reminds me a lot of the current fad for microservices. Instead of having a monolithic server which you depend on, now you have dozens of independent services each of which is critical. Now they all have to be managed, and you depend on all of them to be working. They spend a lot of time sending each other messages, but reliability somehow never seems to increase.

So that's my architectural problem with Hurd. The practical problem with Hurd is that nobody is interested in writing device drivers for it. According to the web pages, there are three drivers they took from the NetBSD rump kernel and, er, that's it. An NVMe driver is about a thousand lines of code (assuming you don't bother to support the insane NVMe-oF extensions). And nobody's done the work to write it!

The web page even unironically says "The Hurd supports modern SATA devices like SSDs". I helped kill SATA over a decade ago (I was on the phone call with the members of the SATA committee who were working on 12Gbit SATA when they agreed that it was pointless to continue because they were all going to use NVMe).

To be clear, I don't hold these opinions because I work on Linux. I work on Linux because I hold these opinions. Before Linux, I worked on a couple of esoteric operating systems. I decided to work on Linux because I thought it was going to be a success, and I wanted to ship code that people would actually use.

Nice to see an update

Posted Apr 6, 2026 3:43 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

> The web page even unironically says "The Hurd supports modern SATA devices like SSDs". I helped kill SATA over a decade ago

Thank you for that. Not.

NVMe is absolutely terrible. It has to be pre-routed because cables are fragile and prone to interference. Meanwhile, SATA cables are robust and flexible.

Moreover, NVMe can't be easily multiplexed. You can't easily have an external multi-drive enclosure with a single port. As a result, the only reasonable standard for high-speed drives now is SAS.

Nice to see an update

Posted Apr 6, 2026 15:46 UTC (Mon) by willy (subscriber, #9762) [Link] (13 responses)

I didn't have anything to do with the hardware form factors. But let's not pretend the situation before NVMe was good. The SATA connector was notorious for being easily dislodged. At Intel, the database team used SAS expanders with SATA drives plugged into them.

There are products using Thunderbolt cables to NVMe drives, eg:
https://www.amazon.ca/Duplicator-Enclosure-External-Capac...

Presumably there's not enough demand to produce this kind of product in a rack mount form factor, or I just suck at searching.

It's kind of funny; the original product that NVMe was developed for was a RAID controller. Obviously things changed ...

Nice to see an update

Posted Apr 6, 2026 17:51 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

> I didn't have anything to do with the hardware form factors

Sure. The physical SATA connector was not the best. But it was _better_ than whatever came after. It still _is_ better than anything newer.

An expansion to 12Gbps (and then probably to 20Gbps that the SATA cable could have electrically handled) would have provided a nice upgrade path. Instead, we have a cliff where there is nothing past the unacceptably slow SATA and unaffordably expensive NVMe.

> There are products using Thunderbolt cables to NVMe drives

No, there aren't. This product is a separate computer with a full-blown CPU and several gigs of RAM for buffering that provides a Thunderbolt interface. That's also why it costs as much as a separate computer and why it's not feasible at the rack-scale.

You'd be far better off buying dedicated NVMe SAN devices with pre-routed ports.

> Presumably there's not enough demand to produce this kind of product in a rack mount form factor, or I just suck at searching.

NVMe is a PCIe bus form factor, so it can't be feasibly made modular.

Nice to see an update

Posted Apr 7, 2026 1:00 UTC (Tue) by intelfx (subscriber, #130118) [Link] (11 responses)

> This product is a separate computer with a full-blown CPU and several gigs of RAM for buffering

So, just like any *target* that would be hanging off the NVMe end of the IC?

> it's not feasible at the rack-scale

How come it's feasible to buy O(N) NVMe SSDs, but not feasible to buy O(N) Thunderbolt target ICs?

An ASM2464 costs like $10 in bulk. That's nothing compared to the cost of the actual hardware on both sides of it.

You have strong opinions about NVMe, fine, but don't try to invent excuses for it that don't stand any sort of scrutiny :-)

Nice to see an update

Posted Apr 7, 2026 1:52 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> So, just like any *target* that would be hanging off the NVMe end of the IC?

It's an _additional_ computer.

> How come it's feasible to buy O(N) NVMe SSDs, but not feasible to buy O(N) Thunderbolt target ICs?

You need more than a Thunderbolt IC. You also need a full USB-C PHY with all its complexity. It ends up being expensive. There are also no standard multiplexing protocols for it.

As a result, high-end HDDs are now going to SAS (Serial-Attached SCSI) that scales to 24Gbps and supports multiple devices on a single link. But it's a separate standard with physically incompatible connectors, so it's almost entirely absent on consumer hardware.

Nice to see an update

Posted Apr 7, 2026 2:30 UTC (Tue) by intelfx (subscriber, #130118) [Link] (9 responses)

> You need more than a Thunderbolt IC. You also need a full USB-C PHY with all its complexity. It ends up being expensive. There are also no standard multiplexing protocols for it.

Look, the point was not that rack-scale systems should use Thunderbolt as it is. Thunderbolt is obviously a consumer technology that solves consumer use-cases.

The point was that it does not take some sort of obscenely profligate technology to route PCIe over a non-fragile cable with a reasonable connector and multiplexing.

Nice to see an update

Posted Apr 7, 2026 2:32 UTC (Tue) by intelfx (subscriber, #130118) [Link] (8 responses)

> There are also no standard multiplexing protocols for it.

Also, a "multiplexing protocol" is literally built into PCIe. The entity implementing it is called a PCIe switch.

Nice to see an update

Posted Apr 7, 2026 3:39 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

> Also, a "multiplexing protocol" is literally built into PCIe. The entity implementing it is called a PCIe switch.

Sigh. Have you _tried_ them? PCIe requires very tight bus arbitration rules and very fast signaling.

So you can easily get devices that simply split one x16 link into, say, four x4 links. But if you want a _true_ switch, you are in the realm of "Call us for pricing". Even the very simplest devices that do true switching cost more than $1000: https://www.amazon.com/HighPoint-Technologies-Rocket-8-M-...

So yep, NVMe sucks. It's a bad standard for _drives_.

Nice to see an update

Posted Apr 7, 2026 9:13 UTC (Tue) by farnz (subscriber, #17727) [Link] (5 responses)

Highpoint are not the cheapest make, by a long shot. For example, this is a simple device based around the PEX88024 PCIe switch chip (which is a true switch, not a splitter), and that costs $200.

Nice to see an update

Posted Apr 7, 2026 10:26 UTC (Tue) by pizza (subscriber, #46) [Link] (4 responses)

> Highpoint are not the cheapest make, by a long shot. For example, this is a simple device based around the PEX88024 PCIe switch chip (which is a true switch, not a splitter), and that costs $200.

While that is certainly cheaper, you still managed to prove Cyberax's point.

(A SAS controller+cables are not only cheaper, but SAS drives are also far cheaper on a $/TB basis. And won't wear out on you like flash will..)

Nice to see an update

Posted Apr 7, 2026 10:59 UTC (Tue) by intelfx (subscriber, #130118) [Link] (3 responses)

> (<...> SAS drives are also far cheaper on a $/TB basis. And won't wear out on you like flash will..)

That's entirely unrelated to the choice of bus/interface.

Sigh. If you argue, please argue *correctly*.

Nice to see an update

Posted Apr 7, 2026 11:06 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> That's entirely unrelated to the choice of bus/interface.

Convenient, you cut out the directly-related first half of my reply.

> Sigh. If you argue, please argue *correctly*.

...Please follow your own advice before casting shade at others.

Meanwhile, are you seriously saying that "total solution cost" isn't a factor in bus/interface selection?

Nice to see an update

Posted Apr 7, 2026 11:35 UTC (Tue) by intelfx (subscriber, #130118) [Link] (1 responses)

> Convenient, you cut out the directly-related first half of my reply.

Because it's not the one I want to respond to. That point is discussed in the other subthread. And yes, it is somewhat more expensive, by virtue of NVMe being a more capable interface. That's basic economics.

> Please follow your own advice before casting shade at others.

I'm not casting shade, I'm directly saying that (a part of) your argument is invalid.

> Meanwhile, are you seriously saying that "total solution cost" isn't a factor in bus/interface selection?

You argued for SAS against NVMe on the basis that "SAS drives" (which you implied to be HDDs) are cheaper and supposedly more reliable than "NVMe drives" (which you implied to be flash-based). Both implications are invalid to make in an argument about *bus interfaces*.

Your argument is akin to saying that because a Ferrari is more expensive than a tricycle (while you are content with the latter), three wheels are better than four.

Nice to see an update

Posted Apr 7, 2026 11:55 UTC (Tue) by Wol (subscriber, #4433) [Link]

> You argued for SAS against NVMe on the basis that "SAS drives" (which you implied to be HDDs) are cheaper and supposedly more reliable than "NVMe drives" (which you implied to be flash-based). Both implications are invalid to make in an argument about *bus interfaces*.

Not only invalid in an argument over buses, but many, many years ago, when somebody did a shoot-out over flash drives, I was left with the very strong impression that flash drives outperformed rotating rust for reliability and longevity ... (okay, not the cheap bargain basement stuff). The only advantage rust has over flash, once you are paying for decent quality, is that rust is a lot cheaper.

Cheers,
Wol

Nice to see an update

Posted Apr 7, 2026 10:57 UTC (Tue) by intelfx (subscriber, #130118) [Link]

> Sigh. Have you _tried_ them?

I did, in fact. (Well, not me specifically, but I was in charge of writing the firmware for a device with one and did see the sausage being made.) It's not that costly. And you don't need one per drive.

> So you can easily get devices that simply split one x16 link into, say, four x4 links

Yeah, that's not a switch, that's a splitter.

> So yep, NVMe sucks. It's a bad standard for _drives_.

You still haven't managed to substantiate your point with anything more than "a strong opinion".

Nice to see an update

Posted Apr 11, 2026 17:06 UTC (Sat) by cesarb (subscriber, #6266) [Link]

> I don't find the HURD architecture terribly compelling. I did when I was younger, but now I'm old and cynical, I don't see the advantage to the user of splitting the kernel services into a bunch of loosely coupled components.

IIRC, the biggest advantage of the Hurd design was that you don't have to be root to use your own translators. For instance, you could implement your own filesystem as a normal user and mount it somewhere within your home directory, without having to beg the sysadmin for the necessary access.

Yes, it's solving problems we no longer have; not only are most of us our own sysadmin (running Linux on our own desktops and laptops instead of using a terminal to a central minicomputer), but also Linux has long grown several mechanisms (in my example, mostly FUSE, but we also have bind mounts, user namespaces, and so on) to do the things Hurd could in theory do with translators.
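
To make the comparison concrete, here is a minimal FUSE sketch (a single read-only file, written against libfuse 3; the name hellofs and the ~/mnt mountpoint are placeholders of my own, not anything from the Hurd or Linux projects) of the kind of filesystem an unprivileged user can mount today on Linux:

    /* hellofs.c -- minimal read-only FUSE filesystem exposing one file, /hello */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *contents = "hello\n";

    /* Report a root directory and one regular, read-only file. */
    static int hello_getattr(const char *path, struct stat *st,
                             struct fuse_file_info *fi)
    {
        (void) fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t) strlen(contents);
            return 0;
        }
        return -ENOENT;
    }

    /* List the single entry in the root directory. */
    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t offset, struct fuse_file_info *fi,
                             enum fuse_readdir_flags flags)
    {
        (void) offset; (void) fi; (void) flags;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0, 0);
        filler(buf, "..", NULL, 0, 0);
        filler(buf, "hello", NULL, 0, 0);
        return 0;
    }

    /* Only allow read-only opens of /hello. */
    static int hello_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, "/hello") != 0)
            return -ENOENT;
        if ((fi->flags & O_ACCMODE) != O_RDONLY)
            return -EACCES;
        return 0;
    }

    /* Copy the requested slice of the file contents into the caller's buffer. */
    static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                          struct fuse_file_info *fi)
    {
        (void) path; (void) fi;
        size_t len = strlen(contents);
        if (offset >= (off_t) len)
            return 0;
        if (offset + size > len)
            size = len - offset;
        memcpy(buf, contents + offset, size);
        return (int) size;
    }

    static const struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .open    = hello_open,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* fuse_main() parses the mountpoint from argv, e.g. ./hellofs ~/mnt */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

Built with something like gcc hellofs.c $(pkg-config fuse3 --cflags --libs) -o hellofs and run as ./hellofs ~/mnt, it needs no root at all; the rough Hurd analogue is a translator attached to a filesystem node with settrans, which is what made the design feel ahead of its time.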

Nice to see an update

Posted Apr 4, 2026 17:41 UTC (Sat) by joed (subscriber, #139219) [Link] (1 responses)

Just for fun?

Nice to see an update

Posted Apr 5, 2026 17:46 UTC (Sun) by sam_c (subscriber, #139836) [Link]

Precisely why I brought it to Gentoo indeed. It's fun and I find its properties cool.

I wanted a recreational project to unwind and this felt fitting.

x86-64 was first introduced in 2003

Posted Apr 17, 2026 9:27 UTC (Fri) by geert (subscriber, #98403) [Link]

> x86-64 was first introduced in 2003 (announced in 1999 though apparently), so Hurd is over 2 decades late on this.

Yeah, Intel was late to the party ;-)
https://en.wikipedia.org/wiki/64-bit_computing#64-bit_dat...

