
AI review?

Posted Sep 15, 2025 10:02 UTC (Mon) by mb (subscriber, #50428)
In reply to: AI review? by viro
Parent article: Creating a healthy kernel subsystem community

Well, in reality though, these models do find bugs and help review or debug code.

I'm not going to argue whether an AI model "understands" anything or not, because we'd first have to define what "understanding" means.

I'm neither saying that AI models find all problems nor that their findings are always correct.
But they can find non-trivial problems with non-local effects. I have successfully used AI models to do exactly that multiple times.

I can give you an example of a multithreading bug that I was hunting for two weeks:
https://github.com/mbuesch/httun/commit/d801db03c8677d4eb...

In line 530, the wrong task abort handle is cloned. This leads to rare hangs and sluggish network communication at the user level, because the task is not aborted and the other tasks keep talking to the old task with its outdated state.
The problem is covered up by the other layers in the system and by the peer across the network, which has restart and retry logic.
Because of that, it was not at all obvious in which part of the whole application stack the problem was.
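
To make the bug pattern concrete, here is a minimal, hypothetical Rust/tokio sketch of that kind of copy-and-paste mistake. The names (Session, worker, the control/data split) are invented for illustration and are not the actual httun code; only the shape of the bug matches.

    // Hypothetical sketch; requires tokio with the "full" feature.
    use std::time::Duration;
    use tokio::task::AbortHandle;

    struct Session {
        control_abort: AbortHandle,
        data_abort: AbortHandle,
    }

    async fn worker(name: &'static str) {
        loop {
            println!("{name} still running");
            tokio::time::sleep(Duration::from_millis(200)).await;
        }
    }

    #[tokio::main]
    async fn main() {
        let control = tokio::spawn(worker("control"));
        let data = tokio::spawn(worker("data"));

        let session = Session {
            control_abort: control.abort_handle(),
            // BUG (copy & paste): this should be data.abort_handle().
            data_abort: control.abort_handle(),
        };

        // When the data path is later torn down, we "abort" it through the
        // cloned handle, but that handle points at the control task, so the
        // stale data task keeps running with its outdated state and the
        // other tasks still talk to it.
        session.data_abort.abort();

        tokio::time::sleep(Duration::from_secs(1)).await;

        // Cleanup for the example.
        session.control_abort.abort();
        data.abort();
    }

Running this keeps printing "data still running" long after the supposed abort, which is exactly the "other tasks still talk to the old task with outdated state" symptom, just without the retry layers hiding it.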

What I did was give the source code to Gemini and describe the behavior I was seeing and what I had already discovered during my two weeks of debugging. (Basically, that I suspected the problem to be in the client part and that the task's state data was outdated.)
Gemini responded literally that there was a copy and paste problem in line 530.
My head still hurts from banging it into the wall after reading this very first sentence.

It went on to describe, in an extremely detailed and correct way, how that c&p problem prevents the task from being aborted, how that leads to old state being preserved, and so on.

So, at this point I don't actually care whether Gemini "understands" my code or my explanations as long as it gives me correct results.
Would I eventually have found this bug without AI? Probably yes. Would I have been much faster, and would I have less grey hair now, if I had asked Gemini earlier in the process? Absolutely yes!

Gemini found the problem, correctly explained it in a lengthy text, and provided a correct fix by fixing the c&p typo (I decided to fix it differently). So AI is a tool that I like to use, and it improves the quality of my code. I don't see anything wrong with that. Most of the time, today's AI is unable to help me with debugging and code review. But even if it only helps one time in 20, it's absolutely worth it.



AI review?

Posted Sep 15, 2025 12:09 UTC (Mon) by iabervon (subscriber, #722)

An LLM will have a lot of success at writing a document that says the most statistically unlikely text in your code is the problem. If you've got a bug that you've been overlooking for a while, that document is probably accurate, because the issue is that you've been reading the code as if it were the predictable, correct thing, and the actual code is not that.

On the other hand, that document wouldn't be good code review for a patch that is correct, or for one where the problem isn't a disagreement between the actual code and what people (or the model) expect the code to be when looking at it.

