
The second half of the 7.0 merge window

By Daroc Alden
February 23, 2026

The 7.0 merge window closed on February 22 with 11,588 non-merge commits total, 3,893 of which came in after the article covering the first half of the merge window. As is usual, the changes in the second half were weighted toward bug fixes over new features. There were still a handful of surprises, however, including 89 separate tiny code-cleanup changes from different people for the rtl8723bs driver, a number that surprised Greg Kroah-Hartman. It's unusual for a WiFi-chip driver to receive that much attention, especially a staging driver that is not yet ready for general use.

The most important changes included in this release were:

Architecture-specific

Core kernel

Filesystems and block I/O

Hardware support

  • Clock: Qualcomm Kaanapali clock controllers, Qualcomm SM8750 camera clock controllers, Qualcomm MSM8940 and SDM439 global clock controllers, Google GS101 DPU clock controllers, SpacemiT K3 clock controllers, Amlogic t7 clock controllers, Aspeed AST2700 clock controllers, and Loongson-2K0300 real-time clocks.
  • GPIO and pin control: SpacemiT K3 pin controllers, Atmel AT91 PIO4 SAMA7D65 pin controllers, Exynos9610 pin controllers, Qualcomm Mahua TLMM pin controllers, Microchip PolarFire MSSIO pin controllers, and Ocelot LAN965XF pin controllers.
  • Graphics: MediaTek MT8188 HDMI PHYs, MediaTek Dimensity 6300 and 9200 DMA controllers, Qualcomm Kaanapali and Glymur GPI DMA engines, Synopsys DW AXI Agilex5 DMA devices, Microchip lan9691-dma devices, and Tegra ADMA tegra264 devices.
  • Industrial I/O: AD18113 amplifiers, AD4060 and AD4052 analog-to-digital converters (ADCs), AD4134 24-bit 4-channel simultaneous sampling ADCs, ADAQ7767-1 ADCs, ADAQ7768-1 ADCs, ADAQ7769-1 ADCs, Honeywell board-mount pressure and temperature sensors, mmc5633 I2C/I3C magnetometers, Microchip MCP47F(E/V)B(0/1/2)(1/2/4/8) buffered-voltage-output digital-to-analog converters (DACs), s32g2 and s32g3 platform ADCs, ADS1018 and ADS1118 SPI ADCs, and ADS131M(02/03/04/06/08) 24-bit simultaneous sampling ADCs.
  • Input: FocalTech FT8112 touchscreen chips, FocalTech FT3518 touchscreen chips, and TWL603x power buttons.
  • Media: MediaTek MT8196 video companion processors and external memory interfaces.
  • Miscellaneous: Foresee F35SQB002G chips, TI LP5812 4x3 matrix RGB LED drivers, Osram AS3668 4-channel I2C LED controllers, Qualcomm Interconnect trace network on chip (TNOC) blocks, and DiamondRapids non-transparent PCI bridges.
  • Networking: Glymur bandwidth monitors, Glymur PCIe Gen4 2-lane PCIe PHYs, SC8280xp QMP UFS PHYs, Kaanapali PCIe PHYs, and TI TCAN1046 PHYs.
  • Power: ROHM BD72720 power supplies, Rockchip RK801 power management integrated circuits (PMICs), ROHM BD73900 PMICs, Delta Networks TN48M switch complex programmable logic devices (CPLDs), sama7d65 XLCD controllers, and Congatec Board Controller backlights.
  • USB: AST2700 SOCs, USB UNI PHY and SMB2370 eUSB2 repeaters, QCS615 QMP USB3+DP PHYs, SpacemiT PCIe/combo PHY and K1 USB2 PHYs, Renesas RZ/V2H(P) and RZ/V2N USB3 devices, Google Tensor SoC USB PHYs, and Apple Type-C PHYs.

Miscellaneous

Networking

Virtualization and containers

  • KVM has added a mask to correctly report CPUCFG bits on LoongArch, which provides LoongArch guests with the correct information about the CPU's capabilities.
  • Some AMD CPUs support Enhanced Return Address Predictor Security (ERAPS) — a feature that removes the need for some security-related flushes of CPU state when a guest exits back to the host operating system. KVM added support for using ERAPS, and for advertising that support to guests.
  • There's a new user-space control to configure KVM's end of interrupt (EOI) broadcast suppression (which prevents an EOI signal from being sent to all interrupt controllers in a system, making interrupts more efficient). Previously, KVM erroneously advertised support for EOI broadcast suppression, even though it wasn't fully implemented. Unfortunately, the flaw persisted long enough that some user-space applications came to depend on the behavior, so now that the feature has been implemented correctly, user-space programs will have to opt in when configuring KVM virtual machines.
  • Guests can now request full ownership of performance monitoring unit (PMU) hardware, which provides more accurate profiling and monitoring than the existing emulated PMU.
  • The kernel's Hyper-V driver has added a debugfs interface to view various statistics about the Microsoft hypervisor.

Internal kernel changes

  • The kernel has been almost entirely switched over to kmalloc_obj() through the use of Coccinelle. Allocations of structure types and union types have all been converted, but allocations of scalar types, which need manual checking, have been left alone. Linus Torvalds followed up with a handful of fixes for the problems that inevitably crop up after this kind of large change. The new interface allocates memory based on the size of the provided type, which should mean fewer mistakes with manual size calculations. Where one would previously write one of these:
        ptr = kmalloc(sizeof(*ptr), gfp);
        ptr = kmalloc(sizeof(struct some_obj_name), gfp);
        ptr = kzalloc(sizeof(*ptr), gfp);
        ptr = kmalloc_array(count, sizeof(*ptr), gfp);
        ptr = kcalloc(count, sizeof(*ptr), gfp);
        ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
    
    One can now write:
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kzalloc_obj(*ptr, gfp);
        ptr = kmalloc_objs(*ptr, count, gfp);
        ptr = kzalloc_objs(*ptr, count, gfp);
        ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
    
    GFP_KERNEL is also now the default, and can be left out if that is the only memory allocation flag that one wishes to set.

The second half of the merge window is often quieter, and this one was no exception. There were a number of debugging features added, however, which is always nice to see. At this point, the kernel will go through the usual seven or eight release candidates as people chase down bugs introduced in this merge window. The final v7.0 kernel should be expected around April 12 or 19.



I smell slop

Posted Feb 24, 2026 1:57 UTC (Tue) by chexo4 (subscriber, #169500) [Link] (3 responses)

> There were still a handful of surprises, however, including 89 separate tiny code-cleanup changes from different people for the rtl8723bs driver, a number that surprised Greg Kroah-Hartman. It's unusual for a WiFi-chip driver to receive that much attention, especially a staging driver that is not yet ready for general use.

That’s very odd. Unless these people are employed by the manufacturer of that device I’d suspect them of trying to vibe code some low hanging fruit.

I smell slop

Posted Feb 24, 2026 14:10 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Back in college, part of an OSS class I was observing was to do some cleanup to staging drivers in the kernel. Networking devices were ripe for the picking. So it could be a class project/assignment?

My first series (likely need to be a subscriber): https://lwn.net/ksdb/releases/v3.13/commits?dev=11170

I smell paranoia

Posted Feb 25, 2026 2:57 UTC (Wed) by fratti (subscriber, #105722) [Link]

The much more likely explanation, given the multitude of authors and one patch's "Suggested-by" crediting Dan Carpenter of Linaro, is that this was a Linux kernel development workshop project.

> Unless these people are employed by the manufacturer of that device

Instead of speculating, you can simply run

git log v6.19..v7.0-rc1 -- drivers/staging/rtl8723bs/

and see that this is not the case.

I just watched veritasium's episode on xz

Posted Feb 26, 2026 22:16 UTC (Thu) by tbuskey (subscriber, #175100) [Link]

But it's probably slop or a class workshop.

Space

Posted Feb 24, 2026 2:13 UTC (Tue) by jmalcolm (subscriber, #8876) [Link] (44 responses)

In addition to the Rust stuff, what makes this kernel historic in my view is support for the SpacemiT K3.

https://lore.kernel.org/lkml/a9a6b840-5a4f-4d27-8b34-da82...

This is the first RISC-V RVA23 SoC and, while it is also fully supported in Ubuntu 26.04, it has mainline support in the 7.0 kernel.

In my view, here is a list of CPUs that introduced new ISAs that changed the desktop computing landscape:
MOS 6502
Intel 8086 (the "PC")
Motorola 68000 (first popular "32 bit" ISA)
Intel 80386 (ubiquitous 32 bit desktop computing)
AMD Athlon (amd64 - later x86-64)
Apple M1 (brought ARM64 to the desktop)
SpacemiT K3 (first RISC-V RVA23)

Better "desktop" chips will come later but this is the architecture those chips will support. You will run the same software on them that runs on the K3 now. And there is an announced K3 laptop so I am counting this as a desktop architecture.

https://deepcomputing.io/dc-roma-risc-v-mainboard-iii-unv...

Is everybody buying a SpacemiT K3 to run as their daily driver? Absolutely not. Single-core performance is about on par with a 2010 MacBook. But multi-core performance is a bit better than a 2019 MacBook Air. And the AI performance is on par with an NVIDIA Orin Nano or Snapdragon X Elite, which is pretty damn good. Most importantly, RVA23 has all the features one would expect from a modern desktop chip, including security, video support, virtualization, and even 1024-bit vectors. As noted, mainstream Linux distros will support it out-of-the-box. Future RISC-V chips will run the same software and future software will run on the SpacemiT K3.

The K3 is already good enough for some of us and RVA23 chips are only going to get better from here. Atlantis, the first RVA23 Tenstorrent Ascalon in silicon, will appear later this year with single-core performance similar to Ryzen 5. I truly believe RVA23 is going to be one of the most important platforms in the future. And kernel 7.0 is where the story begins on Linux.

It is worth noting.

Space

Posted Feb 24, 2026 14:40 UTC (Tue) by pizza (subscriber, #46) [Link] (39 responses)

> SpacemiT K3 (first RISC-V RVA23)

I genuinely don't understand what makes this so special as to "Change the desktop computing landscape".

Because no matter how "open" the core ISA is, that doesn't mean squat for the rest of the SoC, which in the case of RISC-V, remains as hacky and/or proprietary as ever.

Even your "changed the desktop" original Apple M1 is a proprietary mess that _still_ isn't fully usable by an F/OSS operating system. It wasn't the AArch64 ISA (or even the specific microarchitectural implementation) that makes the M1 so much better than the competition; it was the rest of the SoC -- bespoke accelerator blocks tightly integrated into the proprietary operating system and its use of non-upgradeable on-package high bandwidth RAM. (And let's not forget that they also bought out the entire first few batches of TSMC's most advanced processes of the time!)

Outside of the server world where Arm (and more importantly, OS vendors) mandates support for UEFI+ACPI from hardware vendors, Arm-based platforms remain a total shitshow when it comes to commodity OS support, with the Apple M family being amongst the worst. RISC-V will remain in that same boat as long as its various SoC vendors just s/Arm/RISC-V/. Heck, I expect this to be _worse_; at least Arm has a stick to threaten its licensees into compliance with; with RISC-V it will be purely voluntary. And we've seen how well that works.

Space

Posted Feb 24, 2026 19:41 UTC (Tue) by pm215 (subscriber, #98099) [Link] (23 responses)

Personally I've always felt that support for ACPI or other standardisation is not mandated by either the architecture owner or the OS vendors: it's the end customers who demand it, or not. In the server world, nobody buying server systems is going to tolerate "this needs our random BSP fork and custom sauce", so the vendors of servers demand some standardisation from the people they're buying from, like SoC manufacturers. In the x86 desktop market, similarly, customer demand is that your stuff can boot a stock OS, so you can't sell anything else (and where there is less customer demand, e.g. oddball stuff like fingerprint sensors where the vendor can just say "we ship it with Windows drivers to make that work", there is correspondingly less standardisation). In embedded, the people buying embedded devices don't care about being able to install other stuff, as long as the cool hardware features work: so there is no pressure toward standardisation and away from embedded nonsense hacks.

Where riscv is in markets that tolerate a wild west, you'll get a wild west; when it moves into markets that don't tolerate it, somebody will sort out the standards and vendors will follow them to get the sales. Who owns (or doesn't own) the CPU architecture won't make much difference in my view.

Space

Posted Feb 24, 2026 20:11 UTC (Tue) by pizza (subscriber, #46) [Link] (22 responses)

> Where riscv is in markets that tolerate a wild west, you'll get a wild west; when it moves into markets that don't tolerate it, somebody will sort out the standards and vendors will follow them to get the sales.

Well, sure. Except that's no different than the current [desktop] Arm situation. RISC-V (and even RVA23 specifically) brings nothing new to the desktop market to offset the downsides of incompatibility with existing binaries.

x86's "openness" was an historical outlier; instead the overwhelming norm is that the hardware vendor also provides the (sole) operating system option.

Space

Posted Feb 24, 2026 20:50 UTC (Tue) by pm215 (subscriber, #98099) [Link]

"no different to arm here" was basically my point, yes :)

ACPI

Posted Feb 24, 2026 21:17 UTC (Tue) by jmalcolm (subscriber, #8876) [Link] (19 responses)

> Except that's no different than the current [desktop] Arm situation

I completely disagree.

If you choose Apple, you are stuck with Apple Silicon. They are a single provider. If you want ACPI and they don't, too bad for you. End of story.

If Apple Silicon were based on RVA23, anybody could provide an ACPI capable desktop and you could move to it. If this were popular, Apple would likely have to follow suit. See the difference?

Same with Windows. If you buy a Qualcomm X2 Elite, all your applications are Windows on ARM. You at least have a few laptop suppliers to choose from but, again, Qualcomm is the only game in town. Theoretically ARM themselves could produce a competing chip I guess. But that is it. And, again, if Qualcomm do not behave the way you want--too bad.

At least on Windows, there is "some" competition between Qualcomm and Intel and AMD. So there will be some positive competitive pressure there. And there are at least multiple laptop makers. It may be enough to get you ACPI. But maybe not. We will see.

ACPI

Posted Feb 24, 2026 21:33 UTC (Tue) by pizza (subscriber, #46) [Link]

> If Apple Silicon were based on RVA23, anybody could provide an ACPI capable desktop and you could move to it. If this were popular, Apple would likely have to follow suit. See the difference?

So.. if Apple was different, it would be different?

Again, Apple's value proposition is that it is a totally vertical integrated system. The hardware is bespoke and proprietary, and other than the core ISA nobody else has access to _any_ of the Apple peripheral IP.. and even software APIs! They are further reinforcing their moat with every passing year.

Nobody else is trying to emulate the original PC approach. Instead, everyone is trying to be an Apple.

ACPI

Posted Feb 25, 2026 15:39 UTC (Wed) by fratti-co (subscriber, #175548) [Link] (17 responses)

Why are you people so obsessed with ACPI? Having ACPI won't magically give you mainline driver support for all the SoC peripherals.

It gives you a buggy vendor blob that does regulators and pincontrol. Regulators and pincontrol are the easy part, they can also be done in a vendor blob with ARM's SCMI. By not doing them in a vendor blob that always runs in the background you don't have to figure out some harebrained mutual exclusion scheme over the resources. (Think a regulator controlled over I2C; now the kernel can't use that I2C controller because the firmware can touch its registers out from under it. If the kernel needs to talk to some other device on that bus because it's daisy-chained, you now need to do mutual exclusion with the firmware blob.)

ACPI

Posted Feb 25, 2026 16:13 UTC (Wed) by pizza (subscriber, #46) [Link] (15 responses)

> It gives you a buggy vendor blob that does regulators and pincontrol.

Not exactly. It gives you a vendor blob that is *embedded into the hardware* that exports a standard API that allows OSes to perform high-level interactions with said hardware without needing to know anything about the specific combination of regulators/gpios/etc and the nuances needed to sequence everything properly.

It's why you can take a random x86 motherboard produced (eg) 5 years after Windows 7 (or RHEL6 or whatever) was released and expect it to not only boot but generally function properly, including things like suspend/resume. You may need newer drivers for major external peripherals (eg the GPU) but the base system still _works_.

In the Arm world, this "can boot out of the box into a commodity OS" only exists for big iron servers; everything else is a bespoke crapshow that is only usable _after_ support for that specific system is added to mainline Linux. (as opposed to a special snowflake vendor build that more often than not never gets a single update. Or even source code. Please, ask me how I know this)

ACPI

Posted Feb 25, 2026 18:15 UTC (Wed) by fratti (subscriber, #105722) [Link] (14 responses)

No, I know how platform firmware works, and this is a myth and simply not true.

> that is *embedded into the hardware*

It is not. If you turn on an x86 PC, there is a whole bunch of stuff that happens before any bootloader or kernel ever gets to run code; that's all software. This includes whatever implements ACPI. You can run ACPI on anything, it's just that it didn't come from the factory with a certain ACPI description preloaded, so you'd end up making your own, at which point you're better off using DT, as that's an open standard developed on an open mailing list.

> that exports a standard API

It's a Microsoft/Intel standard, but you can't describe non-x86 SoC platforms the same way with it. You can try and end up with a bunch of vendor specific stuff, which you will then need to handle in your OS anyway.

> allows OSes to perform high-level interactions with said hardware without needing to know anything about the specific combination of regulators/gpios/etc and the nuances needed to sequence everything properly.

Again, this is not the hard part. Device tree solves this in a generic declarative manner. You declare your controllers and supplies, you declare the dependencies between them, and Linux probes the right drivers depending on the "compatible" string. Having the device drivers is the hard part.

> It's why you can take a random x86 motherboard produced (eg) 5 years after Windows 7 (or RHEL6 or whatever) was released and expect it to not only boot but generally function properly, including tings like suspend/resume. You may need newer drivers for major external peripherals (eg the GPU) but the base system still _works_.

Because all x86 firmware is targeted towards booting Windows without drivers. Linux on x86 emulates Windows bugs wrt ACPI implementation to allow for this. You are not going to get any of the newer CPU or platform sensor drivers running an older OS, because ACPI is not magic. Similarly, if your GPU is part of your SoC and not on an enumerable bus like PCIe, ACPI will not magically get you a GPU driver. The reason Intel iGPUs and AMD iGPUs work is because they're hanging off PCIe as well, and the driver for them is upstream.

> In the Arm world, this "can boot out of the box into a commodity OS" only exists for big iron servers

This is not true. You can do this on any platform, without ACPI, if said platform had its SoCs peripheral device drivers mainlined. I showcased this in my FOSDEM talk: https://fosdem.org/2026/schedule/event/KLFW73-no-line-lik...

> everything else is a bespoke crapshow that is only usable _after_ support for that specific system is added to mainline Linux.

This is not because they lack ACPI. If you rely on a vendor to have some proprietary ACPI firmware for you, and then you implement whatever bespoke ACPI interfaces they came up with for their SoCs, then you're still having to add platform-specific support to mainline kernels. Except now you're working around vendor firmware quirks. See the Qualcomm Snapdragon Elite situation: it has ACPI, but it's so vendor-specific and quirky, everyone who runs Linux on it just uses devicetrees instead.

x86 motherboard vendors don't need to do this because most of their device drivers are on enumerable buses like PCIe, and the non-enumerable stuff doesn't really change for them because they don't have tightly integrated MMIO register based hardware video decoders, or PWM outputs, or 2D blitters, or whatever else; if they do have it then it's handled by a chipset that hangs off, you guessed it: PCIe.

> (as opposed to a special snowflake vendor build that more often than not never gets a single update. Or even source code. Please, ask me how I know this)

You seem to be under the impression that I don't know what I'm talking about. I get paid to upstream SoC support to mainline Linux. If whatever vendor you bought hardware from had used ACPI in their platform firmware instead, you'd still be in the same situation.

ACPI is not a silver bullet, unless you are specifically Windows or pretending to be Windows, on a platform that Windows runs on.

ACPI

Posted Feb 25, 2026 18:53 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (13 responses)

> It is not. If you turn on an x86 PC, there is a whole bunch of stuff that happens before any bootloader or kernel ever gets to run code; that's all software. This includes whatever implements ACPI.

ACPI is implemented as a virtual machine that the kernel runs on request. So that's how vendors can supply custom quirks for their hardware without going through the upstream kernel. You can peruse it here: drivers/acpi/acpica (like psopcode.c).

It allows weird quirks like "this hardware needs a 500us pause after this register write, because otherwise the power spike overwhelms the buffer capacitors and everything hard-locks".
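For a rough idea of what such a quirk looks like at the source level, here is a minimal ASL (ACPI Source Language) sketch; the device, hardware ID, register address, and PWEN field are all invented for illustration, while Sleep() is a standard ASL operator that delays for the given number of milliseconds:

```asl
Device (PWR0)                       // hypothetical power-switch device
{
    Name (_HID, "DEMO0001")         // invented hardware ID
    OperationRegion (CTRL, SystemMemory, 0xFED40000, 0x4)
    Field (CTRL, DWordAcc, NoLock, Preserve)
    {
        PWEN, 1                     // hypothetical power-enable bit
    }

    Method (_ON, 0, Serialized)     // power-on method run by the OS
    {
        PWEN = One                  // flip the enable bit...
        Sleep (1)                   // ...then wait for the rail to settle
    }
}
```

The OS just evaluates _ON; the board-specific delay travels with the firmware, not with the kernel.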

> You can run ACPI on anything, it's just that it didn't come from the factory with a certain ACPI description preloaded, so you'd end up making your own, at which point you're better off using DT, as that's an open standard developed on an open mailing list.

Device trees ended up being messier than ACPI.

ACPI

Posted Feb 25, 2026 19:19 UTC (Wed) by fratti-co (subscriber, #175548) [Link] (11 responses)

I do know of the ACPI bytecode virtual machine, and do not think the ACPI bytecode virtual machine is a point in favour of ACPI. In DT, stuff like waiting for a certain timespan after turning on a regulator for power to stabilise is handled declaratively. Having a bytecode VM helps you if your quirk affects an existing standardised device but is not a generic quirk that can be expressed declaratively. It does not do anything for supporting new hardware not described by anything or implemented by any driver.
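As a concrete example of that declarative handling, the upstream fixed-regulator binding lets a board state its settling time as a property (the node name, GPIO, and voltages below are made up for illustration; startup-delay-us is a real property from the regulator bindings):

```dts
vdd_panel: regulator-vdd-panel {
	compatible = "regulator-fixed";
	regulator-name = "vdd-panel";
	regulator-min-microvolt = <3300000>;
	regulator-max-microvolt = <3300000>;
	gpio = <&gpio2 7 GPIO_ACTIVE_HIGH>;
	enable-active-high;
	/* the regulator core waits this long after asserting the GPIO */
	startup-delay-us = <500>;
};
```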

And by "helps you" I mean it saves you from submitting 3 entire patches: one to add a vendor prefixed quirk property to the DT binding, one to add it to the device trees it should be in, and one to implement it in the driver.

> Device trees ended up being messier than ACPI.

This is an opinion that I disagree with.

ACPI

Posted Feb 25, 2026 21:22 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

> And by "helps you" I mean it saves you from submitting 3 entire patches: one to add a vendor prefixed quirk property to the DT binding, one to add it to the device trees it should be in, and one to implement it in the driver.

Sigh. And that's EXACTLY why DTs are inferior to ACPI.

You lose the moment you say "just submit 3 patches". This automatically means that if I submit my patches for a new device _today_, I'll see them supported in the currently most popular installed Linux distros maybe in 2 years. Unless somebody on the mailing list gets offended and vetoes my patch. Or just ignores it altogether.

And yes, in the ideal world all generic devices should behave identically, and there should be no individual quirks. But we don't live in the ideal world.

ACPI vs DT

Posted Feb 26, 2026 21:42 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (6 responses)

Even using ACPI instead of DT doesn’t mean the drivers will be upstreamed. What use is an embedded device without support for the hardware it is embedded in? I expect that vendors of desktops and laptops intended to be sold with Linux will upstream their patches, as needing a downstream kernel would make their product less attractive in the market.

Qualcomm Snapdragon laptops use ACPI, but it requires a big Windows-specific binary blob to have good power management, so it doesn’t help Linux at all.

What makes x86_64 work out of the box is not only ACPI + UEFI, but also having a basic set of devices that everyone implements, including the legacy VGA graphics mode.

Intel and AMD upstream support for their hardware before the hardware is even generally available. I’m not aware of any Arm64 vendors that do, though I would love to be proven wrong.

I also wonder if one of the difficulties is due to device vendors not having push access. Intel and AMD can unilaterally commit code without needing review from anyone else. I don’t think any of the Arm64 vendors have this privilege, so they need to wait for code review. This makes submitting patches upstream riskier, as there’s no guarantee they will be accepted.

ACPI vs DT

Posted Feb 26, 2026 22:14 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Even using ACPI instead of DT doesn’t mean the drivers will be upstreamed.

Not my point. ACPI is needed to make already _upstreamed_ devices work together. It allows vendors to have custom quirks for their hardware without going through the kernel submission process. This is especially important for things like power state management that often require model-specific quirks. Such as enabling/disabling external voltage regulators, tweaking some bus settings, or coupling power states together.

Yes, in theory DT allows doing this declaratively. You're supposed to just declare your system topology and then the kernel will do everything automatically. But again, see above about the difference between theory and practice.

And yeah, it's definitely not a guarantee especially on ARM, where people are used to vendors not giving a fuck about long-term platform portability. But we know that this is at least possible, from the state of the x86-based world. Or for ARM-based servers.

> What makes x86_64 work out of the box is not only ACPI + UEFI, but also having a basic set of devices that everyone implements, including the legacy VGA graphics mode.

Which was often implemented in the BIOS in 16-bit x86 code. So the X.org X server used to have an x86 emulator to be able to use it.

ACPI vs DT

Posted Feb 27, 2026 9:14 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

If drivers aren't upstreamed, ACPI and DT are equivalent. If all the work is done upstream-first, then DT is better than ACPI.

Where ACPI wins is when the chip work is done upstream-first, but the board work is not upstreamed. If the board I'm working on now shows up as unreliable at FCT, and needs timing hacks (like "wait N milliseconds after entering D0 state") we are "supposed" to do the following:

  1. Change the DTs we've put into the board to have a vendor quirk name on the DT node.
  2. Change the kernel we're putting into the board to do the board-specific hack, and otherwise do what the normal node for that device does.
  3. Make sure the quirky DT goes in with the quirky kernel.
  4. Either wait until we've upstreamed our kernel change, or change our DT if needed to match the upstreamed changes, and have a way to deploy a new DT to boards already shipped, along with a kernel that matches upstream.

The temptation here, though, is to skip all the steps but step 2, and to change step 2 to "our kernel does a quirky thing on the normal node for that device", which only requires one change, not two, but makes it impossible to have one kernel that works on our device and on other boards with different quirks around that node.

ACPI's bytecode does an end-run around that; if we were using ACPI, the change would be "put ACPI bytecode in to tell the kernel about the board-specific hacks", and that would be enough.

Basically, DT has a "pit of failure" that's easy to fall into if you're shipping under time pressure - ACPI bytecode is a "pit of success" in the same circumstances.

ACPI vs DT

Posted Feb 27, 2026 11:28 UTC (Fri) by bluca (subscriber, #118303) [Link]

> Basically, DT has a "pit of failure" that's easy to fall into if you're shipping under time pressure - ACPI bytecode is a "pit of success" in the same circumstances.

Which covers like, 99.999% of all commercial enterprises ever, everywhere from the beginning until the end of time...

ACPI vs DT

Posted Feb 27, 2026 13:21 UTC (Fri) by pizza (subscriber, #46) [Link] (1 responses)

> If drivers aren't upstreamed, ACPI and DT are equivalent.

That really should be ..."are, at worst, equivalent"

> Where ACPI wins is when the chip work is done upstream-first, but the board work is not upstreamed.

Where ACPI _really_ wins is if combined with a "standard" boot flow that allows a generic/universal operating system image to be used, ala UEFI.

> makes it impossible to have one kernel that works on our device and on other boards with different quirks around that node.

Since your SoC+Board requires a bespoke boot flow and can only boot if supplied a custom pre-installed system image, there's no downside to putting a bespoke kernel on there.

>we are "supposed" to do the following:

You left out:

5) Getting your changes upstream leads to reworks that result in necessary changes to [all] existing DTs

Which puts *everyone else* in a situation where a generic kernel update breaks their hardware because they're using the DT embedded into/generated by the vendor-supplied bootloader instead of the one that's bundled with Linux itself.

> ACPI bytecode is a "pit of success" in the same circumstances.

Unless you pull a Qualcomm and half-ass your ACPI layer such that it calls back into board-specific OS "drivers". Granted, Microsoft shares in the blame for that one. The AArch64 server world shows that this can be done properly, but I guess that's just too haaaaard for under-resourced organizations like Qualcomm and MSFT.

ACPI vs DT

Posted Feb 27, 2026 14:01 UTC (Fri) by farnz (subscriber, #17727) [Link]

There are downsides for other people if we require a non-standard kernel - not only do they have to deal with "so, how do I replace farnz's custom pre-installed system image?", they also have to deal with "what, exactly is quirky about farnz's kernel as compared to upstream sources?". Not a problem for us, but a problem for any future hacker who wants to repurpose our board.

And Qualcomm's ACPI layer is effectively "we like DT, but Microsoft insist on ACPI - let's do what we'd do in DT with ACPI". It's a bunch of declarative nodes that tell the board-specific drivers from Qualcomm how this particular board is laid out, so that the drivers can do the right thing, and is what they'd do with downstream DT patches, too.

Arm64 early upstream support

Posted Feb 27, 2026 9:50 UTC (Fri) by geert (subscriber, #98403) [Link]

> I’m not aware of any Arm64 vendors that do, though I would love to be proven wrong.

Oh, it does happen. But we can't always make a big fuss about it, especially if the initial support ends up in a released kernel version before the SoC is officially announced ;-)
https://society.oftrolls.com/@geert/111764672939425498

Submitting three patches

Posted Feb 26, 2026 10:31 UTC (Thu) by farnz (subscriber, #17727) [Link] (2 responses)

And by "helps you" I mean it saves you from submitting 3 entire patches: one to add a vendor prefixed quirk property to the DT binding, one to add it to the device trees it should be in, and one to implement it in the driver.

Say I'm designing a device based on an off-the-shelf TI chip. Volume production has started, and I'm doing the last software changes that can go into the factory version before sale; I discover that I've made a mistake in my hardware design that means that, while it's reliable with the bench supplies we've been using to date, it'll be unreliable with the in-box PSU, and I need a quirk to make it reliable.

How do I get the 3 patches into the kernel that I'm using, that was released to me 6 months ago? Or should I just have a private downstream patch that I never bother submitting upstream, to change the "standard" property to include my quirk? I don't have time - I'm due to release shortly - to submit a patch upstream, and I don't care about whether my DTs on my device are incompatible with the upstream kernel, since I'm not using that - I have my own fork of it. Oh, and management want the software finalized ASAP, because I'm the last blocker before we can go on sale.

This is the problem with "submit a patch"; when people are in a hurry and under pressure from management, they'll do the minimum necessary work to get it working. You then have the problem that you have devices in the field with a "bad" DT, since it claims the standard property, but I've patched the kernel to understand that the unprefixed property is actually quirky.

Submitting three patches

Posted Feb 26, 2026 11:44 UTC (Thu) by pizza (subscriber, #46) [Link] (1 response)

> How do I get the 3 patches into the kernel that I'm using, that was released to me 6 months ago? Or should I just have a private downstream patch that I never bother submitting upstream, to change the "standard" property to include my quirk?

That's the dirty secret -- Take away "standard" x86 PCs, and approximately zero folks are running/shipping unmodified upstream kernels on their systems. Even in the world of standard PCs, only folks on fairly bleeding edge distros are on the mainline. The vast, vast, vast majority of devices out there are running a frankenkernel forked at some point in the past that may or may not be based on an LTS release still getting critical fixes. Additionally, once you step outside the world of x86 PCs you're lucky to get even partial sources to the running kernel+drivers, which means you can't even attempt to support yourself and port those "3 patches" to the mainline.

Submitting three patches

Posted Feb 26, 2026 12:37 UTC (Thu) by amacater (subscriber, #790) [Link]

See also discussions on user mailing lists - I'm using an ARM board and it works on the supplied kernel and nothing else ...

At least one Linux distribution (Armbian) does a heroic job taking manufacturer-supplied kernels - usually with no documentation - and making a Debian userland work on top of them. Try to update it and you're lost without trace.

That's distinct from, say, trying to get Debian themselves to accept kernel patches and forward them upstream where you might have a little more luck, but only a little.

ACPI

Posted Feb 25, 2026 21:40 UTC (Wed) by pizza (subscriber, #46) [Link]

> Device trees ended up being messier than ACPI.

That's a massive understatement.

Sure, the vendor "supplies a devicetree and Linux JustWorks(tm)" is the goal, and it's *wonderful* when it does -- except that in practice, a given devicetree is closely coupled to a given kernel+bootloader, and the one supplied with the vendor-mangled kernel+bootloader [1] doesn't actually _work_ with what eventually lands in mainline...a couple of years after the fact.

(BTW, I realize that JustWorks(tm) nature of PCs isn't just due to ACPI -- The (equally opaque) vendor-supplied BIOS [+UEFI] provides a standard boot flow despite wildly varying hardware configurations)

At this point I have... five different AArch64 boards/systems within immediate reach. No two of them share the same boot flow, and only one of them (a RPi3B+) is fully[-enough] supported by mainline Linux[+uboot] [2] or has an upstream distribution-supported installation method (as opposed to a vendor-supplied pre-installed distro image of some sort), despite the newest board of that set coming onto the market more than 3 years ago.

This sort of crap is why I gave up and replaced all but one (the RPi) of those baling-wire-and-hope Arm systems with cheap low-end AMD A9-9400 NUCs (which cost me _less_ than contemporary RPis+PSUs+enclosures would have).

[1] Invariably a partial-source (if not entirely source-less) fork of a kernel that was EOL'd two years before the vendor release with a large pile of SoC+board-specific changes in drivers (if not entire subsystems) beyond the devicetree itself
[2] Notably excluding GPUs and the like

ACPI

Posted Feb 26, 2026 22:34 UTC (Thu) by csamuel (✭ supporter ✭, #2624) [Link]

> Why are you people so obsessed with ACPI?

I don't know about others but I'm interested in ARM's MPAM support, which is currently working its way into the kernel, and it will only be discovered by those patches if your system is using ACPI.

All the best,
Chris

Mainline kernel

Posted Feb 24, 2026 21:50 UTC (Tue) by jmalcolm (subscriber, #8876) [Link]

> the overwhelming norm is that the hardware vendor also provides the (sole) operating system option

The very catalyst for my comment is that SpacemiT is adding support for their hardware in the mainline kernel before the hardware has even shipped. Ubuntu has announced full support for the K3 in Ubuntu 26.04. Tenstorrent is doing the same. Deep Computing has given numerous talks on the importance of mainline support. UltraRISC says "Planned Upstream to Linux Mainline (2026 Q4)".

On RISC-V, I expect this to be "the norm" and hopefully "the overwhelming norm".

You will not be going to the hardware vendor for "the (sole) operating system option".

RVA23

Posted Feb 24, 2026 20:43 UTC (Tue) by jmalcolm (subscriber, #8876) [Link] (14 responses)

What makes RISC-V special is not the specific hardware you are running on today. It is the ability to take the entire existing software ecosystem and move it wherever you want. Or, to reverse that, it is the inability for anybody to take that software ecosystem away from you.

For x86-64, we have only AMD and Intel and both of those are US based.

It was not long ago that only ARM was designing ARM cores. Even now, there are only a handful of players doing so.

There are already more companies designing RISC-V chips than there are companies designing ARM cores. New RISC-V companies will emerge. Some RISC-V companies will not make it. But the ecosystem will continue to thrive. This is new.

It is not about the performance of any given chip. It is the performance that this ecosystem will deliver. And if there is demand for features like ACPI and UEFI, somebody in the RISC-V world will be providing them. We already see this:
https://milkv.io/titan

Again though, the value of RISC-V is not performance or price or features (though competition tends to deliver all of these). It is control. You can select the RISC-V provider that suits you best. You can build your own RISC-V chip if you want. India can build their own. Europe can build their own. Or they can source one from somebody geopolitically compatible with them. That kind of security and control is only available on RISC-V.

And RISC-V companies can truly innovate. You do not have to have a strategy compatible with ARM's strategy (or Intel's). There is no ARM suing Qualcomm in the RISC-V world. And the RISC-V extension model means you can experiment in silicon while running out-of-the-box software. So you can tailor your solutions to markets and use cases. It is why you see RISC-V already starting to make waves in AI.

In terms of competition, ARM is better than it used to be. Qualcomm X Elite is providing some competition to Apple Silicon for example though it is not "completely" compatible. I am not sure if it would be possible to create an X Elite Hackintosh. Regardless, this is not going to provide the same benefit that will come when there are 3-5 credible RISC-V providers where there is always somebody "emerging" to push the envelope.

And there will be a "fully open" RISC-V ecosystem in the way you crave at some point. It is inevitable. You already see fully open designs coming out of the Damo Academy. Universities and countries will create designs and open them. The Red Hat Linux of hardware will emerge at some point. These may or may not offer bleeding edge performance. But again, you have the freedom to choose what you want. It will not be long before some of these cores will be fully open and totally good enough for many of us.

In the meantime, there is limited supplier power. Today, a RISC-V vendor commands market share by having the best implementation. If they do not keep up, the market moves. If they act against the best interest of their customers, the market moves. You get these benefits even without "open" cores. We will have the same kind of portability we get with Linux distros. There is very little "lock in" forcing you to use any given distro. You are only tied to "Linux".

Along with an open ISA will likely come the ability to manufacture silicon in small batches economically. These are the kinds of things that fall into place as the other pieces become available. The most important piece is the software ecosystem. And we have that now in RVA23.

RVA23

Posted Feb 24, 2026 22:07 UTC (Tue) by pizza (subscriber, #46) [Link] (13 responses)

>And RISC-V companies can truly innovate. You do not have to have a strategy compatible with ARM's strategy (or Intel's). There is no ARM suing Qualcomm in the RISC-V world.

If you think RISC-V vendors won't try to sue each other once the stakes are high enough, or if you think RISC-V vendors won't ship/support bespoke GPL-violating Linux kernels if they think it will help achieve a competitive advantage (or simply out of convenience/indifference) you are laughably naive.

If I come off as cynical here it's because I have seen SoCs that differ only in that they ripped out an Arm core/cluster and replaced it with the RISC-V equivalent. Sure, it's "Linux" but the drivers are the same proprietary blobs as before, locking you into their platform.

(The *primary* interest in RISC-V from existing players isn't "freedom to innovate", it's that with sufficient volume, you come out ahead designing your own core versus licensing an existing Arm core. Of course, with recent geopolitical asshattery there's "national security" considerations too, but if that's what motivates you there already were several existing fully open ISAs)

> In terms of competition, ARM is better than it used to be. Qualcomm X Elite is providing some competition to Apple Silicon for example though it is not "completely" compatible.

At the CPU instruction level, yes, it is "compatible". In every other respect, they are light-years apart. There is literally zero shared peripheral (or low-level system firmware, or, or, or) between those SoCs.

> I am not sure if it would be possible to create an X Elite Hackintosh.

If you write macOS drivers for every peripheral in the X-Elite, including emulating all of the stuff that doesn't exist (or simply functions differently -- notably this includes GPUs and NPUs, which use Apple-specific APIs so you can't just port an existing driver) then sure, a Hackintosh is "possible". I just don't know why anyone would bother.

> Along with an open ISA will likely come the ability to manufacture silicon in small batches economically.

Uh... that is entirely orthogonal. Silicon production cost scales with the process node and the size of your chip; fabs don't care if said chip contains arm cores or risc-v cores or a sea of random logic or a bunch of microscopic emojis.

Also, just because *existing* RISC-V extensions are "open" doesn't mean that all possible future extensions will be. And it definitely doesn't guarantee that any of these "freedoms" will trickle down to end-users. What practical difference does it make if your locked-down smartphone is built on a RISC-V core versus an Arm core?

RVA23

Posted Feb 24, 2026 22:59 UTC (Tue) by dskoll (subscriber, #1630) [Link] (9 responses)

Yeah, I don't see the openness of RISC-V as really impacting the freedom of software users all that much. The ones who will really benefit are embedded designers, who will be able to throw a RISC-V core into their FPGA design without having to pay any royalties.

For products that sell high volumes at low prices and need an embedded CPU, RISC-V can be a huge boon.

RVA23

Posted Feb 25, 2026 13:32 UTC (Wed) by pizza (subscriber, #46) [Link] (8 responses)

> Yeah, I don't see the openness of RISC-V as really impacting the freedom of software users all that much. The ones who will really benefit are embedded designers, who will be able to throw a RISC-V core into their FPGA design without having to pay any royalties.

Absolutely -- it's an additional arrow in the quiver of folks that produce chips. Especially if you have the desire/need to tweak the instruction set slightly, and have the product volume to justify the added NRE. But at the "system/board" level (to say nothing of the end-user) it's at best a wash.

RISC-V won't magically make core/chip vendors or hardware OEMs get together to voluntarily commoditize their own products in the market any more than Arm did. Or Power/PPC did, or M68K did, or even Z80 and 6502 did. -- On all of them, you have the same ISAs (or even physical processors) but radically different hardware and/or software ecosystems built around them.

Everyone is trying to differentiate themselves and build a moat. Every time there's been user-benefitting commoditization it's been driven by a single dominant player looking to capture the value for themselves (eg Microsoft with DOS (and the Flight Simulator compatibility test, heh) then Windows, or Google with Android, and arguably Red Hat with Linux). Voluntary trade associations can work, but only if backed up with a big stick (eg USB, Bluetooth, Wifi etc have trademarks and patents that are only licensed if you pay up and/or meet certain testing/interop criteria). Otherwise you end up with the likes of Linaro, which IMO was ultimately a failure with respect to taming the Arm ecosystem -- nobody other than Arm put any real effort/funding into it while the SoC vendors continued to incestuously consolidate and build moats around their "competitive advantages" rather than voluntarily commoditize their own product offerings.

(And it doesn't matter how many unique core vendors there are when they are all funnelled through the same fabs -- to paraphrase the quote from _Armageddon_, "American Components, Russian Components... All made by TSMC!" Geology is actually a greater risk to the semiconductor supply chain than geopolitics, even today...)

RVA23

Posted Feb 25, 2026 18:32 UTC (Wed) by ejr (subscriber, #51652) [Link] (7 responses)

As one of the founders of Chisel once told me: no one wins a race to the bottom, so why not blow out the bottom?

Yeah, RISC-V, etc. are primarily about hardware and esp. *hardware design tools*. People can argue about ISA aspects forever and ever (practically requiring micro-op re-fusing is a biggie...).

When RISC-V was just starting up, students were churning out new designs *with silicon* on a 9-ish month cadence (DARPA PERFECT). The chip packagers couldn't keep up... I had students elsewhere who could learn Chisel well enough to turn out simulation-correct encryption engine designs within two semesters (not counting the really time-consuming pieces like tests, validation, etc.).

That's without massive licensing costs for the big tool vendors. The latter have had to re-evaluate some of their value propositions given more accessible HDLs, widely usable and quality tools (e.g. Yosys-based flows), and "cheaper" fabs. No, you can't match the best performance / energy general-purpose chips, but being able to produce smaller volumes with designs specialized to the customer task while staying economically feasible is massive.

RVA23

Posted Feb 25, 2026 18:59 UTC (Wed) by pizza (subscriber, #46) [Link] (6 responses)

> As one of the founders of Chisel told me once that no one wins a race to the bottom, so why not blow out the bottom?

That attitude is what's created the pervasive BigTech-driven product enshittification cycle, where "data about the user" is the actual product and everything else (software _and_ hardware alike) is provided at or below cost to enable said data collection.

> That's without massive licensing costs for the big tool vendors. The latter have had to re-evaluate some of their value propositions given more accessible HDLs, widely usable and quality tools (e.g. Yosys-based flows), and "cheaper" fabs. No, you can't match the best performance / energy general-purpose chips, but being able to produce smaller volumes with designs specialized to the customer task while staying economically feasible is massive.

Yes, "cheaper" is highly relative. And of course, every incremental reduction in total costs helps improve accessibility, but simple RISC-V cores being royalty-free is just one (very small) prong of that.

RVA23

Posted Feb 25, 2026 19:58 UTC (Wed) by ejr (subscriber, #51652) [Link] (5 responses)

Building physical chips is different than the cycle you mention. His observation was that the cost of producing commodity HW was racing to the bottom, so let's just level the field and enable producing specialized designs more easily. Blowing away the CPU IP expense is part of leveling the playing field. Nothing to do with what you (rightly, imho) decry.

It actually is somewhat an opposite... An idea was to leverage less-expensive commodity pieces from elsewhere while keeping an advantage through specialization (value-add). At the rock-bottom of cost, there's little to no room for spy gizmos in the commodity pieces. Or at least not while you have competitors producing things more cheaply without the gizmos. The nation-state subsidies can't last very long with that pressure. Whether that would have worked or not no longer matters.

RVA23

Posted Feb 25, 2026 22:12 UTC (Wed) by pizza (subscriber, #46) [Link] (4 responses)

> At the rock-bottom of cost, there's little to no room for spy gizmos in the commodity pieces. Or at least not while you have competitors producing things more cheaply without the gizmos

There is no special "spy gizmo" added; the hardware already has the necessary capabilities because that's what's needed for its official purpose, but at the rock-bottom (and everything is racing down there) the hardware is subsidized by folks that supply software to turn said hardware *into* a spy gizmo. (Witness pretty much every consumer electronic device currently for sale: the lower the sticker price, the more ad/spy-ware present)

eg a "smart speaker" needs to have an always-on microphone in order for you to give it instructions, and hooks into various external online services to perform what you asked of it. The difference between a "good" and a "spy gizmo" smart speaker is that the latter CC's what you say or do to third parties. That CCing is invariably performed by the backend service provider, not the speaker itself.

(Granted, a malicious actor could upload software into the speaker to make it directly stream everything to wherever it wants. But that's still just re-using the existing necessary-for-the-device's-primary-function hardware.)

RVA23

Posted Feb 25, 2026 23:35 UTC (Wed) by ejr (subscriber, #51652) [Link] (3 responses)

We're talking across each other. I intend to mean that the actual chips don't have the margin for extra. The general ones have become non-customer-facing commodities, or at least that was part of the push. Didn't quite work out that way for many reasons outside the technical.

You seem to mean the full systems which certainly do have that margin. I don't have any always-on listening devices, etc. *Maybe* my phone, but I don't have anything intentionally running that listens. I'm willing to make that trade-off for that device.

I'm not saying that becoming purely a commodity is great. The impact of being a commodity can hurt the suppliers severely. See the fun times in coffee. The specialty market didn't exist until relatively recently, and the impact of commoditization on farmers was immense. See also mega-farmed corn, labor markets in agriculture, ... And actually the DRAM market until the recent explosion of "specialty" DRAM (HBM variants). It's a tricky thicket.

(Also, the lowest-power wake word detectors don't have the energy budget to record anything. Some don't even spend the energy needed to convert to digital... Analog lives! ;) )

RVA23

Posted Feb 25, 2026 23:49 UTC (Wed) by pizza (subscriber, #46) [Link] (1 response)

> We're talking across each other. I intend to mean that the actual chips don't have the margin for extra.

I wouldn't be so sure about that -- especially with modern processes, the analog components -- notably including the I/O pads -- put a lower limit on how small you can shrink a chip, and it is quite common to end up with unused (or underutilized) dead space that you can't optimize away. So you may as well stuff some additional functionality in there.

(Personally, I think this sort of hardware-level malware is pretty far down the list of dangers one has to worry about)

RVA23

Posted Feb 26, 2026 6:36 UTC (Thu) by malmedal (subscriber, #56172) [Link]

> Personally, I think this sort of hardware-level malware is pretty far down the list of dangers one has to worry about

Also, detection would be practically impossible; the fab will insert secret circuits for entirely legitimate reasons, e.g. at startup a circuit will wait until it has verified that power is stable before it allows the customer logic to run. Any IP blocks, such as SRAM, will have a self-test and self-repair going on.

RVA23

Posted Feb 26, 2026 17:20 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

We're talking across each other. I intend to mean that the actual chips don't have the margin for extra.

Except there's value from being able to spy, which can potentially subsidize the added cost of including spy circuits. That could be a business that wants to spy for advertising or a nation state that wants to spy on its adversaries. We've already seen that complete devices that spy on their users can undercut the price of ones that can't, because the spying adds enough value to the manufacturer to enable them to subsidize the price. Just compare the price of a spying-capable "smart" TV to a similar-spec "dumb" monitor, for instance. With individual chips, it would more likely be nation states interested in inserting spying capabilities rather than the manufacturer, but experience says state security apparatuses can easily get manufacturers in their country to go along with this kind of thing.

RVA23

Posted Feb 25, 2026 20:13 UTC (Wed) by jmalcolm (subscriber, #8876) [Link] (2 responses)

@pizza

> RISC-V vendors won't try to sue each other once the stakes are high enough

Of course they will. But one competitor suing another over a specific implementation is quite different than an essential partner trying to block your entire business from entering the market (ARM tried to block Qualcomm from using the ARM ISA at all). Night and day.

> ship/support bespoke GPL-violating Linux kernels

No doubt some will. I hope early support in the Linux mainline is seen as a competitive advantage instead. But we will see. This has nothing to do with my argument other than competition giving end-users a chance to vote with their feet and wallets.

> interest in RISC-V from existing players

There is no sense in arguing back and forth but I think control is the big attraction even more than licensing cost. "National Security" is just one aspect of that. At the microcontroller level, it means choosing the extensions you want a la carte. At the application processor level, it means not needing approval from the ISA owner to build extensions on top of RVA23 to support things your customers want (for AI, automotive, space, or whatever).

> If you write MacOS drivers for every peripheral in the X-Elite
> I just don't know why anyone would bother

On Intel, people did exactly this. And it worked because the x86-64 chips in both Macs and PCs were the same. But Apple Silicon and Snapdragon X Elite are not even compatible at the ISA level.

> fabs don't care

I am not talking about traditional fabs. My point here was that, if you do not need to worry about IP, other manufacturing methods can be developed. Not worth getting into here.

> In every other respect, they are light-years apart

I understand that RISC-V does not stop any given vendor from providing hardware that is undocumented, inaccessible, or completely locked down. This happens, it will continue to happen, and RISC-V does not prevent any individual supplier from doing it.

My point is merely that, on RISC-V, you have the power to take your software somewhere else. And others have the freedom to offer you a somewhere else to go to. That is it.

Every iPhone runs on Apple Silicon and your iOS software requires this. Every Android phone runs on cores designed by ARM (the company). These companies or the countries they reside in can completely deny you access to the entire ecosystem. This is not hypothetical. It has happened. It cannot happen on RISC-V. Of course, my provider can also go out of business. This cannot realistically happen on RISC-V either, because I will not be tethered to the commercial success of any single supplier (or even small group).

You are talking about the freedom to access the hardware you have. I am talking about the freedom to take your software somewhere else. If iOS or macOS ran on RISC-V, I could create a competing device for the software to run on and I could make that hardware as open as I want. That is not possible with Apple Silicon.

The openness of the PC ecosystem is not an accident. It is the result of competition. IBM tried to lock their platform down with the PS/2 series, including changing the architecture and hardware in ways only they could support. The market demanded application portability, rejected IBM's attempt, and open standards were created by their competitors. Instead of dominating the market, IBM eventually exited it. The ecosystem is not perfect. There is lots of weird hardware in laptops for which there are no drivers, for example. But it is fairly "open" in the way you describe. But there are still only two chip vendors. So the actual amount of openness is still pretty limited.

RISC-V brings competition down to the level of the chip designer, not just the board maker. This makes it possible to have a much more open ecosystem than even the PC. It does not guarantee it of course. But it becomes possible. And anybody can throw their hat in the ring to take the market there. If the buyers want it, it will come.

RVA23

Posted Feb 25, 2026 21:33 UTC (Wed) by pizza (subscriber, #46) [Link]

> Of course they will. But one competitor suing another over a specific implementation is quite different than an essential partner trying to block your entire business from entering the market (ARM tried to block Qualcomm using the ARM ISA at all). Night and day.

Arm v Qualcomm was a run-of-the-mill supply chain contract dispute. Granted, it got particularly nasty, but so what?

(I will add that Qualcomm is considered to be the Oracle of the semiconductor world, ie a legal department that happens to sell chips. They are well known for legal skullduggery, and I even have some personal experience to that effect. I will also add that this (and other) experience with Qualcomm is a large part of why Arm is now designing+selling their own chips instead of just licensing cores+IP)

> On Intel, people did exactly this. And it worked because the x86-64 chips in both Macs and PCs were the same

Not just the "x86-64 chips" but also the rest of the underlying platform (eg UEFI+ACPI) and (commodity) peripherals shared with the larger PC ecosystem. The commonality probably exceeded 98%.

> But Apple Silicon and Snapdragon X Elite are not even compatible at the ISA level.

So what if Arm and Qualcomm implement different extensions on top of the baseline ARMv9.0 ISA? Should they not be allowed to write software that wrings every ounce of potential performance out of their chips? BTW, the same argument applies to x86 (even different products from the same vendor) and *especially* RISC-V, which places no restrictions on what you can or can't include in your design -- RVA23 is analogous to the mandatory parts of (eg) ARMv9.0; there's nothing preventing vendors A and B from adding on additional extensions to their SoCs and producing binaries that won't work on the other. Or patenting said extensions and suing anyone else that implements them without a license.

> My point is merely that, on RISC-V, you have the power to take your software somewhere else. And others have the freedom to offer you a somewhere else to go to. That is it.

If it is "your software" you can trivially recompile it to target a different processor. If you can't "trivially recompile it" or are specifically talking about *binary compatibility* then it's not (just) the instruction set that's locking you in -- it's your vendor, ecosystem/platform/API, OS, and/or peripherals. (eg Modern x86-64 processors are still fully capable of natively running DOS+Windows 3.1, but that doesn't mean you can just execute a windows 3.1 binary that tries to talk to a parallel-port-attached peripheral directly on a laptop with a USB-attached parallel port running RHEL10)

(That's not to say you won't have possible performance regressions targeting a different processor family)

> These companies or the countries they reside in can completely deny you access to the entire ecosystem. This is not hypothetical. It has happened. It cannot happen on RISC-V.

Unless you happen to have a semiconductor fab in the back of your shop, you're still SOL should $controlling_company/country decide you're persona non grata. That has also happened, and RISC-V doesn't change that risk one iota.

> You are talking about the freedom to access the hardware you have. I am talking about the freedom to take your software somewhere else. If iOS or macOS ran on RISC-V, I could create a competing device for the software to run on and I could make that hardware as open as I want. That is not possible with Apple Silicon.

If you tried that, you'd have to (a) recreate everything other than the CPU core -- including cloud services, and (b) still get sued out of existence by Apple anyway.

> At the application processor level, it means not needing approval from the ISA owner to build extensions on top of RVA23 to support things your customers want (for AI, automotive, space, or whatever).

Again, this freedom applies to the chip vendor, not to the folks writing (or running) software on said chips. The moment your software relies on any extensions beyond the "explicitly free" ones (let's assume for now there are no patent trolls just biding their time to go after the RVA23 baseline) you're back where you started, ie "locked in".

Look, RISC-V is *great* for folks building their own chips, especially when produced for internal use by vertically-integrated companies. However, that greatness will _not_ trickle down the supply chain or to end-users [1]; at best it is a wash with the current status quo, at worst the market will further fragment, resulting in _less_ portability.

[1] absent a quasi-monopolistic player coming to dominate and imposing their standards on everyone else

RVA23

Posted Feb 26, 2026 10:16 UTC (Thu) by farnz (subscriber, #17727) [Link]

> The openness of the PC ecosystem is not an accident. It is the result of competition. IBM tried to lock their platform down with the PS/2 series including changing the architecture and hardware in ways only they could support. The market demanded application portability and rejected IBM's attempt and open standards were created by their competitors. Instead of dominating the market, IBM eventually exited it. The ecosystem is not perfect. There is lots of weird hardware in laptops for which there are no drivers for example. But it is fairly "open" in the way you describe. But there are still only two chip vendors. So the actual amount of openness is still pretty limited.

That competition is entirely an accident, and if you could time-travel back to 1981 and show IBM that clones of the 5150, 5160, and 5170 would come to dominate the market, it wouldn't have happened. IBM never intended clones of the IBM PC to exist; had they realised that they had no way to require royalties from clone makers, they would have designed the 5150's expansion bus differently, to ensure that you needed a licence from IBM to design a compatible planar. Instead, the planar's bus on the 5150 just brought Intel's bus signals out to slots for expansion cards, meaning that Intel were the company who could stop you cloning the PC, and not IBM. This was thought, by IBM, to be of no consequence, since they could take control again later if cloning started to happen, and in any case, you'd need to clone the PC BIOS. Indeed, IBM took legal action against early clone makers to stop them - the intent of the "open architecture" was not to enable cloning, but to enable a market for peripherals that IBM didn't control.

The PS/2 was merely an attempt to bring the PC back to where it was always "supposed" to be.

Space

Posted Feb 24, 2026 14:48 UTC (Tue) by willy (subscriber, #9762) [Link] (1 responses)

You missed ARM 2 in 1986 which was the basis of the Acorn Archimedes desktop computers.

Space

Posted Feb 24, 2026 21:01 UTC (Tue) by jmalcolm (subscriber, #8876) [Link]

I did not exactly "miss it". Not being from the UK, I have never seen an Archimedes or, in fact, any Acorn system.

However, they are cool machines and I recognize the history and the difference in regional experience. Certainly, there is no ARM64 without Acorn.

My first computer was a COLECO ADAM. The Z80 is not making my list. :-)

Space

Posted Feb 24, 2026 16:58 UTC (Tue) by Wol (subscriber, #4433) [Link]

> In my view, here is a list of CPUs that introduced new ISAs that changed the desktop computing landscape:

NatSemi 32032?

ISTR someone here saying it was a buggy disaster, but the design itself was well thought through.

I can't remember whether it was the 32032 or the 68000, but one of those chips had a wonderful mechanism for saving the CPU registers when making a call (or for whatever other reason): they were memory-backed into main RAM, and the chip had a "register pointer" to the RAM location. To back up the registers before a call, you simply changed the register pointer, I think by default to the top of the stack. When you returned from the call, you set the register pointer back to where it was when you were called, restoring the old CPU state.

Cheers,
Wol

6809 (was Space)

Posted Feb 24, 2026 21:28 UTC (Tue) by dskoll (subscriber, #1630) [Link]

My first computer around 1982 was a Radio Shack Color Computer (original model) which had a 6809 CPU. I'm sad that the 6809 didn't catch on compared to the 6502; the 6809 was the far superior architecture. It was an 8-bit processor, but was capable of running a multitasking OS (OS-9) that had some UNIX-like design aspects.

On the other hand, the 6809 was quite a bit more expensive than the 6502, so I guess that's why it never caught on. And it's a bit funny and sad that the original CoCo picked a high-end (for the time) CPU and cheaped out on every other aspect of the hardware design.

zram compressed backend

Posted Feb 24, 2026 15:03 UTC (Tue) by claudex (subscriber, #92510) [Link] (3 responses)

> The kernel's zram subsystem provides a compressed, in-memory block device that can optionally move data to a physical disk when memory fills up. Previously, the kernel would have to decompress the pages before writing them to the physical device. Now, page writeback can directly write zram-compressed data.

This seems like a big win: in addition to not waiting for decompression before moving pages out, the data will be read and written faster. It will be a big advantage over zswap.

zram compressed backend

Posted Feb 25, 2026 7:06 UTC (Wed) by PeeWee (subscriber, #175777) [Link] (1 responses)

> The kernel's zram subsystem provides a compressed, in-memory block device that can optionally move data to a physical disk when memory fills up. Previously, the kernel would have to decompress the pages before writing them to the physical device. Now, page writeback can directly write zram-compressed data.

> This seems like a big win: in addition to not waiting for decompression before moving pages out, the data will be read and written faster. It will be a big advantage over zswap.
I beg to differ, or Chris Down, rather. When you mention zswap in the same sentence it is clear that you (want to ab)use it as "compressed RAM", but since it's a swap device as far as the kernel is concerned, "writeback" has a totally different meaning than in the zswap context. Whereas the latter essentially adds a very fast tier in front of the actual non-volatile swap space, and can thus do its own LRU tracking, the former needs some additional care (see Chris Down's article, page 7 f.), from userspace, no less. That makes it unfit in real memory pressure situations, which can get you in trouble in milliseconds, and the next poll is only up in five minutes or so.

It also looks like zram can only be ordered to write back idle and incompressible pages, on demand, mind you. You cannot order it to flush a certain amount of memory to the backing store without selecting some page range(s) about which you have no information as to their age or place in a (non-existent) LRU list. Zswap, OTOH, holds all the cards and has all the information to make the best choice of which pages to write to the backing store; the usual caveats of heuristics apply.
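For reference, the writeback mechanism being described is driven entirely through sysfs, along the lines of the kernel's zram admin-guide documentation; the device name and sizes below are illustrative:

```shell
# A backing device must be attached before the zram device is sized.
echo /dev/sdb1 > /sys/block/zram0/backing_dev
echo 1G > /sys/block/zram0/disksize

# Later, from some management daemon: mark all current pages idle...
echo all > /sys/block/zram0/idle
# ...then, after a while, write out the pages that are still idle:
echo idle > /sys/block/zram0/writeback
# Incompressible ("huge") pages can be written back directly:
echo huge > /sys/block/zram0/writeback
```

Nothing in this sequence is triggered by memory pressure; some user-space component has to decide when to poke these files, which is the polling problem described above.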

So no, zswap is still the only sane choice if writeback is desired. It integrates rather nicely with the whole memory-management system, shortcomings like decompression on writeback and preallocation of swap space notwithstanding.

zram compressed backend

Posted Feb 25, 2026 8:29 UTC (Wed) by claudex (subscriber, #92510) [Link]

>It also looks like zram can only be ordered to writeback idle and uncompressible pages, on demand, mind you.

Yeah, I didn't see that limitation; that's not a good fit for a general use case.

zram compressed backend

Posted Feb 25, 2026 7:43 UTC (Wed) by PeeWee (subscriber, #175777) [Link]

Plus, I really, really don't see any valid use case for zram writeback in any scenario outside its abuse as a swap device. Maybe I am too narrow-minded, but why would one choose to use a compressed block device in RAM, only to then have parts of its contents written to non-volatile storage? As I said in the other thread, it looks like the initially intended use case was a version of tmpfs with transparent compression. And, as I've just learned myself, tmpfs pages can be zswapped (nowadays).

There is also the problem of partial duplication, maybe. When zram is used as the underlying block device of a real filesystem, pages read by some app accessing said filesystem need to be decompressed to be usable. What happens to the compressed page? Will it stay on the zram device as a (partial) duplicate, albeit at a (hopefully) smaller size?

Compare that to tmpfs, whose natural habitat is the page cache. And when its contents get (z)swapped out, that page cache memory is reclaimed. On (z)swap-in, the reverse happens. I really do wonder who would use a zram device and put an actual filesystem on it, with all the hassle that entails. The entry hurdle to tmpfs is basically mount.
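A rough sketch of that difference in setup effort (paths and sizes here are arbitrary):

```shell
# tmpfs: one step, and its pages live in the page cache,
# eligible for (z)swap like any other reclaimable memory.
mount -t tmpfs -o size=1G tmpfs /mnt/scratch

# zram as a filesystem backing store: load the module, size the
# device via sysfs, make a filesystem, then mount it.
modprobe zram
echo 1G > /sys/block/zram0/disksize
mkfs.ext4 /dev/zram0
mount /dev/zram0 /mnt/scratch
```

And the zram variant still needs the writeback knobs configured separately if any of its contents should ever reach non-volatile storage.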

kmalloc_obj()

Posted Feb 24, 2026 15:34 UTC (Tue) by geert (subscriber, #98403) [Link] (1 responses)

While
    ptr = kmalloc(sizeof(struct some_obj_name), gfp);
can indeed (assuming the types match) be replaced by
    ptr = kmalloc_obj(*ptr, gfp);
most were converted to
    ptr = kmalloc_obj(struct some_obj_name, gfp);
instead, which was easier to automate.

kmalloc_obj()

Posted Feb 26, 2026 11:02 UTC (Thu) by maxfragg (subscriber, #122266) [Link]

Might be a matter of preference, but I have to say I strongly prefer having the type written out explicitly here.
Yes, using *ptr means you automagically get the right size, even if one changes the type of ptr, but for readability I really think reading struct some_obj_name makes the code easier to follow.


Copyright © 2026, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds