
My calendar must be wrong...


Posted Feb 7, 2026 10:02 UTC (Sat) by Alterego (guest, #55989)
In reply to: My calendar must be wrong... by pschneider1968
Parent article: An in-kernel machine-learning library

+1

All LLMs can have "hallucinations" and go terribly wrong.

Collect data? Why not. But make decisions in the kernel logic? No, please. I want a perfectly known worst-case scenario.



Not all ML is LLMs

Posted Feb 7, 2026 13:12 UTC (Sat) by danielkza (subscriber, #66161) [Link] (1 responses)

As an LLM skeptic myself, I understand that many people have an immediate negative reaction to the thought of ML being integrated into software, especially the kernel, but reasoning about the use case is key for accurate criticism.

- Not all ML applications are LLMs. Language processing does not seem to be related to the posted patch at all: the data to be collected and used to train models is not textual, and inference will be used to produce tuning parameters or make algorithmic decisions;
- Collecting system data to train models for optimization is already part of the toolbox of hyperscalers. They do it with proprietary kernel patches, data pipelines, training infrastructure, and so on;
- The benefits of workload-aware performance and efficiency tuning can be significant, but are out of reach for most users and companies;
- Upstreaming this functionality in the kernel would be a first step toward fostering an open ecosystem that could be broadly useful.

That said, I have not reviewed the patch and am not endorsing it. It's entirely possible the kernel community decides they do not concur with the approach, but it's important to discuss it with clarity.

Not all ML is LLMs

Posted Feb 9, 2026 12:39 UTC (Mon) by simlo (guest, #10866) [Link]

All ML models basically fit a complex function with many parameters to some data, then try to extrapolate to new data. That goes wrong once in a while. You have to design the consumers to handle that somehow, usually by checking the result against another source.
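One defensive pattern for such a consumer, sketched here in Python with entirely hypothetical names (the real consumers would be kernel C code), is to accept a model's prediction only when it passes a sanity check against known-safe bounds, and otherwise fall back to the hand-coded heuristic:

```python
def safe_tuning_value(predicted, heuristic_default, lo, hi):
    """Accept an ML-predicted tuning parameter only if it falls
    inside a known-safe range; otherwise fall back to the fixed
    heuristic value (the 'other source' to check against)."""
    if predicted is None or not (lo <= predicted <= hi):
        return heuristic_default
    return predicted

# A plausible prediction is used as-is; an implausible one is rejected.
assert safe_tuning_value(3.7, 2.0, 1.0, 5.0) == 3.7
assert safe_tuning_value(99.0, 2.0, 1.0, 5.0) == 2.0
```

This keeps the worst case bounded: even when the model "hallucinates", behavior degrades to the existing heuristic rather than to something arbitrary.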

My calendar must be wrong...

Posted Feb 7, 2026 14:54 UTC (Sat) by kleptog (subscriber, #1183) [Link] (1 responses)

Why are you talking about LLMs? This patch is about machine learning; it doesn't mention LLMs at all.

There are many useful ML models, though I don't immediately see any obviously useful places for them here. Currently the OOM killer and the scheduler use all sorts of heuristics; perhaps an ML model could do better?

The patch seems to mostly cover the interaction of a model running in user space acting on data collected by the kernel. Interesting idea, but without a concrete use case...
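As a rough illustration of that split (all names are hypothetical, not taken from the patch), a user-space agent might periodically read metrics the kernel exports, run inference in user space, and write a tuning parameter back through a sysfs-style knob:

```python
def tune_step(read_metrics, write_knob, model):
    """One iteration of a hypothetical user-space tuning loop.
    The kernel only exports data and accepts a parameter back;
    the model itself never runs in kernel context."""
    metrics = read_metrics()   # e.g. parsed from a /sys or /proc file
    value = model(metrics)     # inference happens in user space
    write_knob(value)          # e.g. written back to a tuning knob
    return value

# Toy stand-in model: scale a knob with the observed run-queue
# length, capped at 10. Real I/O is replaced by a lambda and a list.
model = lambda m: min(10, 1 + m["runqueue_len"])
written = []
tune_step(lambda: {"runqueue_len": 4}, written.append, model)
# written is now [5]
```

The design point is that a model failure can at worst write a bad knob value, which the kernel can still range-check, rather than misbehave inside kernel logic.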

My calendar must be wrong...

Posted Feb 12, 2026 9:34 UTC (Thu) by anton (subscriber, #25547) [Link]

For the OOM killer, I don't see where the feedback (or ground truth) that ML learns from would come from in sufficient quantity.

For scheduling decisions, for example, I can imagine some feedback mechanisms, but they would probably need to be weighted with information about what is important for each thread (as discussed recently).

Anyway, where heuristics are already at work, ML looks appropriate. The only problem is that explaining any failures becomes even harder than with hand-coded heuristics.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds