AI review?
Posted Sep 15, 2025 8:24 UTC (Mon) by viro (subscriber, #7872)
In reply to: AI review? by shemminger
Parent article: Creating a healthy kernel subsystem community
The difference between broken and correct can be subtle and highly non-local; what's more, you need trainers capable of doing the analysis themselves *and* of giving usable feedback to trainees, whoever or whatever those trainees might be. It's hard to do, it takes serious time, and, considering the amount of training needed for AI models, employing enough such trainers would cost way too fucking much.
AI review?
Posted Sep 15, 2025 10:02 UTC (Mon) by mb (subscriber, #50428) [Link] (1 responses)

I'm not going to argue about whether an AI model "understands" anything or not, because we would first have to define what "understanding" means. Nor am I saying that AI models find all problems, or that their findings are always correct. But they can find problems with non-trivial and non-local effects. I have used AI models to do exactly that, successfully, multiple times.

I can give you an example of a multithreading bug that I hunted for two weeks:

https://github.com/mbuesch/httun/commit/d801db03c8677d4eb...

In line 530 the wrong task abort handle is cloned, which leads to very rarely hanging and sluggish network communication at the user level, because the task is not aborted and the other tasks still talk to the old task with outdated state. This problem is covered up by the other layers in the system and by the peer across the network, which has restart and retry logic. Because of that, it was not at all obvious in which part of the whole application stack the problem was.

What I did was give the source code to Gemini and describe what behavior I was seeing and what I had already discovered during my two weeks of debugging (basically, that I suspected the problem to be in the client part, and that I suspected the task's state data to be outdated).

Gemini responded, literally, that there was a copy-and-paste problem in line 530. My head still hurts from banging it into the wall after reading that very first sentence. It went on to describe, in an extremely detailed and correct way, how that copy-and-paste problem prevents the task from being aborted, and how that leads to the old state being preserved, and so on.

So, at this point, I don't actually care whether Gemini "understands" my code or my explanations, as long as it gives me correct results. Gemini found the problem, correctly explained it in a lengthy text, and provided a correct fix by repairing the copy-and-paste typo (I decided to fix it differently). Therefore, AI is a tool that I like to use, and it improves the quality of my code. I don't see why there would be anything wrong with that. Most of the time, today's AI is unable to help me with debugging and code review. But even if it helps only one time in twenty, it is absolutely worth it.

Would I eventually have found this bug without AI? Probably yes. Would I have been much faster, and would I have less grey hair now, had I asked Gemini earlier in the process? Totally, absolutely yes!

AI review?
Posted Sep 15, 2025 12:09 UTC (Mon) by iabervon (subscriber, #722) [Link]

On the other hand, that document wouldn't be good code review for a patch that is correct, or one where the problem isn't a disagreement between the actual code and what people (or the model) expect the code to be when looking at it.
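[Editor's note: the copy-and-paste bug class mb describes (cloning the wrong task's abort handle, so the intended task is never stopped and keeps serving stale state) can be sketched in plain Rust. This is a hypothetical reduction, not code from httun: `AbortHandle`, `Connection`, and the method names are stand-ins, with an atomic flag playing the role of a real async runtime's abort mechanism.]

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Stand-in for an async runtime's task abort handle:
// setting the shared flag "aborts" the associated task.
#[derive(Clone)]
struct AbortHandle(Arc<AtomicBool>);

impl AbortHandle {
    fn new() -> Self {
        AbortHandle(Arc::new(AtomicBool::new(false)))
    }
    fn abort(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_aborted(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

// Hypothetical connection owning handles for an old and a new task.
struct Connection {
    old_task: AbortHandle,
    new_task: AbortHandle,
}

impl Connection {
    // Buggy: a copy-and-paste error clones the *old* task's handle,
    // so aborting through it never stops the intended (new) task.
    fn restart_handle_buggy(&self) -> AbortHandle {
        self.old_task.clone() // should be: self.new_task.clone()
    }

    // Fixed: clone the handle of the task we actually want to abort.
    fn restart_handle_fixed(&self) -> AbortHandle {
        self.new_task.clone()
    }
}

fn main() {
    let conn = Connection {
        old_task: AbortHandle::new(),
        new_task: AbortHandle::new(),
    };
    // With the typo, aborting via the cloned handle stops the wrong task;
    // the intended task keeps running with its outdated state.
    conn.restart_handle_buggy().abort();
    assert!(conn.old_task.is_aborted());
    assert!(!conn.new_task.is_aborted());

    let conn2 = Connection {
        old_task: AbortHandle::new(),
        new_task: AbortHandle::new(),
    };
    conn2.restart_handle_fixed().abort();
    assert!(conn2.new_task.is_aborted());
}
```

Aborting through the handle returned by the buggy method stops the wrong task while the intended one keeps running, which is exactly the "task is not aborted, stale state survives" symptom described above; the fix is a one-line change to clone the other handle.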
