Lower bar to start kernel development?
Posted Aug 7, 2025 22:57 UTC (Thu) by SLi (subscriber, #53131)
Parent article: On the use of LLM assistants for kernel development
What would be the likelihood of a first submitter submitting a non-low-quality patch without an LLM?
This seems potentially both good and bad. I'd argue: *If* this lowers the bar and makes more people eventually graduate into "real" kernel development, that's not purely negative.
But sure, there's something qualitatively new about this; it's not just kids sending code as a Word document.
Actually, I would predict that we will outgrow this problem in a way that will hugely annoy some and make others decide it's not a problem. LLMs are still improving fast. Sure, there are always people who wouldn't see value in them and would claim they are no good even if they outperformed humans.
I think we will reach, in the not-too-distant future, a point where the LLMs will do well enough that most of their work will be thought to be made by a competent hyena. (I mean human, but I love that autocorrect.)
More pragmatically, I think this would mean LLMs growing to the level where they are at least OK at kernel development, but also pretty good at knowing what they are not good at.
But if you want to ease maintainer burden, maybe make an LLM review patches where an LLM contributed (I personally find it silly to say that an LLM "authored" something, just like I don't say code completion authored something). And then forward them to an LLM maintainer, who asks TorvaLLM to pull them. And have them argue about whether there should be a disclosure if unreliable humans touched the patch.
Posted Aug 8, 2025 6:24 UTC (Fri) by gf2p8affineqb (subscriber, #124723) [Link] (5 responses)
I don't see any issue with a first-time user working with LLVM.
Posted Aug 8, 2025 7:43 UTC (Fri) by Wol (subscriber, #4433) [Link] (3 responses)
At the end of the day, most of the stuff on the net is rubbish. The quality of what an LLM outputs is directly correlated with the quality of what goes in (it must be; without human review and feedback, it has no clue). Therefore, most LLM output has to be rubbish, too.
If your AI is based on a SMALL Language Model, where the stuff fed in has been checked for accuracy, then the results should be pretty decent. I don't use AI at all (as far as I know; the AI search engine slop generally has me going "what aren't you thinking!!!"), but my work now has a little AI that has access to all our help docs and thus does a decent job for most people - except that, as always, people don't think, and people keep getting referred to Guru docs for more detail - HINT: roughly a third of the company doesn't have access to Guru, as a matter of policy!!! Argh!!!
Cheers,
Wol
Posted Aug 8, 2025 9:04 UTC (Fri) by jepsis (subscriber, #130218) [Link] (2 responses)
Here are some examples of useful prompts:
Is the naming of functions and variables consistent in this subsystem?
Are the comments sufficient, or should they be added to or improved?
If I were to submit this upstream, what aspects might attract nitpicking?
Does the commit message accurately reflect the commit, or are there any gaps?
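To make these concrete, here is a rough sketch of how one might script such prompts against a patch before sending it out. The ask_llm() function is only a placeholder for whatever model or client you actually use, and the file name in the usage note is made up:

    #!/usr/bin/env python3
    """Run a fixed set of review prompts against a patch before sending it out.
    ask_llm() is a placeholder -- wire it up to whatever model you actually use."""

    import sys
    from pathlib import Path

    REVIEW_PROMPTS = [
        "Is the naming of functions and variables consistent in this subsystem?",
        "Are the comments sufficient, or should they be added to or improved?",
        "If I were to submit this upstream, what aspects might attract nitpicking?",
        "Does the commit message accurately reflect the commit, or are there any gaps?",
    ]

    def ask_llm(prompt: str, patch_text: str) -> str:
        # Placeholder: substitute a call to your model of choice here.
        raise NotImplementedError("plug in your own LLM client")

    def review(patch_path: str) -> None:
        # Read the patch once and ask each review question about it.
        patch_text = Path(patch_path).read_text()
        for prompt in REVIEW_PROMPTS:
            print(f"== {prompt}")
            print(ask_llm(prompt, patch_text))

    if __name__ == "__main__":
        review(sys.argv[1])

Run it as "review.py 0001-foo.patch" and treat the answers as review hints, not verdicts.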
Posted Aug 8, 2025 9:32 UTC (Fri) by khim (subscriber, #9252) [Link] (1 response)
That's not “first-time user working with LLVM” (LLM, I assume?). That's “experienced kernel developer trying LLM”. A first-time user's request would be more like “here's the spec for that hardware that I have, write a driver for it”. And then the resulting mess is sent to the maintainer, warts, bugs and all.
Posted Aug 8, 2025 9:43 UTC (Fri) by jepsis (subscriber, #130218) [Link]
Sure. Good example. It would have been good to have that sentence checked by AI, as it would likely have corrected it.
Posted Aug 8, 2025 12:33 UTC (Fri) by kleptog (subscriber, #1183) [Link]
The step after that would be an LLM iterating over an AST so it doesn't have to worry about getting the syntax right, but I haven't read about anyone doing that. It's not clear to me whether that technology even exists yet.
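For what it's worth, here is a minimal sketch of the idea, using Python's own ast module as a stand-in (a C codebase would need a Clang-based equivalent); the rename table stands in for whatever structured edit a model might propose:

    # Edits expressed against the AST can't produce syntactically broken
    # code, whatever the model suggests.
    import ast

    # Source to operate on; in practice this would be the file under edit.
    source = "def comp(a, b):\n    return a < b\n"

    # Imagine this mapping came back from the model as a structured edit
    # ("rename comp to compare") instead of a raw text diff.
    renames = {"comp": "compare"}

    class Rename(ast.NodeTransformer):
        def visit_FunctionDef(self, node):
            node.name = renames.get(node.name, node.name)
            self.generic_visit(node)
            return node

    tree = Rename().visit(ast.parse(source))
    print(ast.unparse(tree))   # ast.unparse needs Python 3.9+

The point is that the model's output is a set of operations on nodes, not text, so the result always parses even if the suggestion itself is wrong.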
Posted Aug 8, 2025 8:43 UTC (Fri) by khim (subscriber, #9252) [Link] (1 response)
> I'd argue: *If* this lowers the bar and makes more people eventually graduate into "real" kernel development, that's not purely negative.

LLMs do precisely the opposite: they make the first-ever patch look better than your average patch, but they make it harder for a newcomer to “eventually graduate into "real" kernel development”. That's precisely the issue with current AI: degradation of output. LLMs don't have a world model, and when you try to “teach” them they start performing worse and worse. To compensate, their makers feed them terabytes, then petabytes, of human-produced data… but that well is almost exhausted; there is simply no more data to feed into them. And this scaling only improves the initial output; it does nothing about the lack of a world model and the inability to learn during the dialogue.

Worse: as we know, when an ape and a human interact, the human turns into an ape, not the other way around. The chances are high that the story with LLMs will be the same: when complete novices try to use LLMs to “become kernel developers”, they will become more and more accepting of LLM flaws instead of learning to fix them. This, too, would increase the load placed on maintainers.

> LLMs are still improving fast.

Yes and no. They are fed more and more data, which improves the initial response, but that does nothing about the gradual degradation of output when you try to improve it. Sooner or later you hit the “model collapse” threshold and then you have to start from scratch.

> pretty good at knowing what they are not good at

So far that hasn't worked at all. LLMs are all too happy to generate nonsense output instead of admitting that they don't know how to do something.

> maybe make an LLM review patches where an LLM contributed

Given that LLMs tend to collapse when fed their own output (that's why even the most expensive plans don't give you the ability to generate long outputs; instead they give you the ability to request many short ones), this would make the situation worse, not better.
Posted Aug 8, 2025 15:46 UTC (Fri) by laurent.pinchart (subscriber, #71290) [Link]
An interesting study on that topic: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (https://arxiv.org/abs/2506.08872)