
Not all ML is LLMs

Posted Feb 7, 2026 13:12 UTC (Sat) by danielkza (subscriber, #66161)
In reply to: My calendar must be wrong... by Alterego
Parent article: An in-kernel machine-learning library

Also being an LLM skeptic, I understand that many have immediate negative reactions to the thought of ML being integrated into software, especially the kernel, but reasoning about the use case is key for accurate criticism.

- Not all ML applications are LLMs. Language processing does not appear to be related to the posted patch at all. The data to be collected and used to train models is not textual, and inference will be used to produce tuning parameters or make algorithmic decisions;
- Collecting system data to train models for optimization is already part of the toolbox of hyperscalers. They do it with proprietary kernel patches, data pipelines, training infrastructure, etc;
- The benefits of workload-aware performance and efficiency tuning can be significant, but are out of reach for most users and companies;
- Upstreaming this functionality in the kernel would be a first step toward fostering an open ecosystem that could be broadly useful.
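As a concrete, entirely hypothetical illustration of the non-LLM use case: in-kernel inference could be as small as a fixed-point linear model (kernel code avoids floating point) that maps a few workload counters to a tuning parameter such as a readahead window. The features, weights, and clamp bounds below are invented for illustration and are not from the posted patch.

```c
#include <assert.h>

/* Hypothetical fixed-point linear model mapping workload counters to a
 * tuning parameter.  All values here are invented for illustration. */
#define FP_SCALE 1024  /* fixed-point scale: 1.0 == 1024 */

static const long weights[3] = { 32 * FP_SCALE, -16 * FP_SCALE, 8 * FP_SCALE };
static const long bias = 8 * FP_SCALE;

/* features[]: e.g. sequential-read ratio, cache-miss rate, queue depth,
 * each pre-scaled to the range 0..FP_SCALE.  Returns readahead pages. */
static long predict_readahead_pages(const long *features)
{
    long acc = bias;
    for (int i = 0; i < 3; i++)
        acc += weights[i] * features[i] / FP_SCALE;
    long pages = acc / FP_SCALE;

    /* Clamp to a known-safe range: the model tunes, it must not break. */
    if (pages < 1)
        pages = 1;
    if (pages > 256)
        pages = 256;
    return pages;
}
```

The weights would come from offline training on the collected data; the kernel side only evaluates the cheap inner product and clamps the result.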

That said, I have not reviewed the patch and am not endorsing it. It's entirely possible the kernel community decides they do not concur with the approach, but it's important to discuss it with clarity.


Not all ML is LLMs

Posted Feb 9, 2026 12:39 UTC (Mon) by simlo (guest, #10866)

All ML models basically fit a complex function with many parameters to some data and try to extrapolate to new data. That goes wrong once in a while, so the consumers of the results have to be designed to handle it somehow, usually by checking the result against another source.
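A minimal sketch of that consumer-side pattern (the function name and thresholds are invented): treat the model's output as a hint, and fall back to a conventional heuristic whenever the prediction fails a sanity check.

```c
#include <assert.h>

/* Hypothetical consumer of a model prediction: accept it only when it
 * passes a plausibility check against a deterministic heuristic, and
 * fall back to the heuristic otherwise.  All numbers are invented. */
static long choose_timeout_ms(long predicted_ms, long heuristic_ms)
{
    /* Reject impossible values outright. */
    if (predicted_ms <= 0)
        return heuristic_ms;

    /* Reject predictions more than 4x away from the known-safe baseline:
     * extrapolation on unseen workloads can be arbitrarily wrong. */
    if (predicted_ms > heuristic_ms * 4 || predicted_ms < heuristic_ms / 4)
        return heuristic_ms;

    return predicted_ms;
}
```

The point is that the model never has the final word: a wrong extrapolation degrades to the behavior the system already had without ML.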


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds