Not all ML is LLMs
Posted Feb 7, 2026 13:12 UTC (Sat) by danielkza (subscriber, #66161)
In reply to: My calendar must be wrong... by Alterego
Parent article: An in-kernel machine-learning library
- Not all ML applications are LLMs. Language processing does not appear to be related to the posted patch at all: the data to be collected and used to train models is not textual, and inference will be used to produce tuning parameters or to make algorithmic decisions;
- Collecting system data to train models for optimization is already part of the hyperscalers' toolbox. They do it with proprietary kernel patches, data pipelines, training infrastructure, and so on;
- The benefits of workload-aware performance and efficiency tuning can be significant, but are out of reach for most users and companies;
- Upstreaming this functionality in the kernel would be a first step toward fostering an open ecosystem that can be broadly useful.
That said, I have not reviewed the patch and am not endorsing it. The kernel community may well decide it does not agree with the approach, but it's important to discuss it with clarity.
