LWN: Comments on "Creating a healthy kernel subsystem community" https://lwn.net/Articles/1036908/ This is a special feed containing comments posted to the individual LWN article titled "Creating a healthy kernel subsystem community". en-us Tue, 11 Nov 2025 19:27:44 +0000 Tue, 11 Nov 2025 19:27:44 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Such an important article, and not just for kernel development https://lwn.net/Articles/1038893/ https://lwn.net/Articles/1038893/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; And, to wit, dismissing the idea that gently reminding people to be considerate to others is beneficial, using acerbic sarcasm as the main rhetorical tool, may not be the most effective argument. It rather undermines itself, in fact.</span><br> <p> ...As the saying goes, the fish rots from the head down.<br> <p> <p> </div> Sat, 20 Sep 2025 11:32:27 +0000 Such an important article, and not just for kernel development https://lwn.net/Articles/1038887/ https://lwn.net/Articles/1038887/ ssmith32 <div class="FormattedComment"> <span class="QuotedText">&gt;Being nice to people has rewards! Who knew?</span><br> <p> Um. There certainly have been a few incidents in the kernel community where it does seem like people didn't believe that to be true. Entire screeds have been written that basically boil down to "being a jerk is actually good". <br> <p> And, to wit, dismissing the idea that gently reminding people to be considerate to others is beneficial, using acerbic sarcasm as the main rhetorical tool, may not be the most effective argument. It rather undermines itself, in fact.<br> </div> Sat, 20 Sep 2025 07:32:56 +0000 Linux model prone to burnout? 
https://lwn.net/Articles/1038261/ https://lwn.net/Articles/1038261/ neilbrown <div class="FormattedComment"> <span class="QuotedText">&gt; it seems to me that the Linux model with the hierarchy of individuals </span><br> <p> We are more and more a hierarchy of teams.<br> <p> <span class="QuotedText">&gt; of individuals responsible for ...</span><br> <p> I think it is important to note that those individuals (or teams) take on that responsibility themselves; they don't have it imposed upon them by others. So they are free to relinquish it when it becomes a burden, either for a time or permanently.<br> I think it would be interesting to explore the dynamics of why they don't or when they do.<br> </div> Mon, 15 Sep 2025 21:03:05 +0000 Thank you LWN https://lwn.net/Articles/1038259/ https://lwn.net/Articles/1038259/ shalem <div class="FormattedComment"> Jake, thank you for the great article on my talk.<br> </div> Mon, 15 Sep 2025 19:46:53 +0000 Such an important article, and not just for kernel development https://lwn.net/Articles/1038257/ https://lwn.net/Articles/1038257/ ma4ris8 <div class="FormattedComment"> I like the article too.<br> I just noticed it; after a quick check,<br> there are many good recommendations<br> for improving co-operation.<br> <p> </div> Mon, 15 Sep 2025 19:02:12 +0000 Such an important article, and not just for kernel development https://lwn.net/Articles/1038234/ https://lwn.net/Articles/1038234/ koverstreet <div class="FormattedComment"> <span class="QuotedText">&gt; (burnout is not something I've experienced)</span><br> <p> But you spend all your time just bouncing from one cool project that only benefits the community to another!<br> <p> It must be such an idyllic life :)<br> </div> Mon, 15 Sep 2025 14:54:43 +0000 Linux model prone to burnout? 
https://lwn.net/Articles/1038122/ https://lwn.net/Articles/1038122/ koverstreet <div class="FormattedComment"> You get burnout when you've been stuck doing things that have to be done for too long, to the point where you have no freedom for yourself in your day-to-day life. <br> <p> In the more hobbyist, non-corporate world, the cause can be insufficient resources. In the corporate world, where it's just a job and where I see the most burned-out people, it's poor time management skills. <br> <p> In the corporate world, there are chronically too many managers running around and too many meetings - leading to "when everything is a priority, nothing is" syndrome. <br> <p> For me, I'm not feeling burned out - and I think that's in large part because I jealously and religiously protect my autonomy. And the other things I've found help the most are the things no manager is going to ask you to do: spend time cleaning up the codebase, both for your own future sanity and to make it easier for other people to contribute. And spend time on the community: getting people involved and teaching them what you do means there are more people to talk to about the cool stuff you're doing, and then those people start to do cool stuff all on their own.<br> </div> Mon, 15 Sep 2025 14:32:51 +0000 AI review? https://lwn.net/Articles/1038095/ https://lwn.net/Articles/1038095/ iabervon <div class="FormattedComment"> An LLM will have a lot of success at writing a document that says the most statistically unlikely text in your code is the problem. 
If you've got a bug, and you've been overlooking it for a while, that document is probably actually accurate, because the issue is that you've been reading the code as if it were the predictable, correct thing, and the actual code is not that.<br> <p> On the other hand, that document wouldn't be good code review for a patch that is correct, or one where the problem isn't a disagreement between the actual code and what people (or the model) expect the code to be when looking at it.<br> </div> Mon, 15 Sep 2025 12:09:05 +0000 AI review? https://lwn.net/Articles/1038086/ https://lwn.net/Articles/1038086/ mb <div class="FormattedComment"> Well, in reality though, these models do find bugs and help review or debug code.<br> <p> I'm not going to argue whether an AI model "understands" anything or not, because we'd first have to define what "understanding" means.<br> <p> I'm neither saying that AI models find all problems nor that their findings are always correct.<br> But they can find problems that are non-trivial and have non-local effects. 
I have successfully used AI models to do exactly that multiple times.<br> <p> I can give you an example of a multithreading bug that I was hunting for two weeks:<br> <a href="https://github.com/mbuesch/httun/commit/d801db03c8677d4eb562d0aa8d364f10b973a849#diff-8388cbdbffd5ebc8b0b9480435e7f0cd7e0dad97731b163391fdf9ed904abe9eL530">https://github.com/mbuesch/httun/commit/d801db03c8677d4eb...</a><br> <p> In line 530 the wrong task abort handle is cloned, which leads to very rarely hanging and sluggish network communication on the user level, because the task is not aborted and the other tasks still talk to the old task with outdated state.<br> This problem is covered up by the other layers in the system and by the peer across the network having restart and retry logic.<br> Due to that, it was not obvious at all in what part of the whole application stack the problem was.<br> <p> What I did is give the source code to Gemini and describe what behavior I was seeing and what I had already discovered during my two weeks of debugging. (Basically that I suspected the problem to be in the client part and that I suspected the task's state data to be outdated.)<br> Gemini responded literally that there was a copy and paste problem in line 530.<br> My head still hurts from banging it into the wall after reading this very first sentence.<br> <p> It went on to describe in an extremely detailed and correct way how that c&amp;p problem prevents the task from being aborted and how that would lead to old state being preserved and so on.<br> <p> So, at this point I don't actually care whether Gemini "understands" my code or my explanations as long as it gives me correct results.<br> Would I eventually have found this bug without AI? Probably yes. Would I have been much faster, and would I have less grey hair now, if I had asked Gemini earlier in the process? 
Totally, absolutely yes!<br> <p> Gemini found the problem, correctly explained it in a lengthy text, and provided a correct fix for it by fixing the c&amp;p typo (I decided to fix it differently). Therefore, AI is a tool that I like to use, and it improves the quality of my code. I don't see why there would be anything wrong with that. Most of the time, today's AI is unable to help me with debugging and code review. However, even if it only helps one time in 20, it's absolutely worth it.<br> </div> Mon, 15 Sep 2025 10:02:03 +0000 AI review? https://lwn.net/Articles/1038083/ https://lwn.net/Articles/1038083/ viro <div class="FormattedComment"> Generic AI in its present state is unable to understand, period. It can produce text statistically indistinguishable from an actual review of something vaguely similar to what it's been given. Which, fair enough, does pass the Turing Test - there are human beings who operate on that level; the problem is that reviews from such human beings are worse than useless.<br> <p> The difference between broken and correct can be subtle and highly non-local; what's more, you need trainers capable of doing the analysis themselves *and* of giving usable feedback to trainees, whoever or whatever those trainees might be. It's hard to do, it takes serious time, and, considering the amount of training needed for AI models, employing enough such trainers would cost way too fucking much.<br> </div> Mon, 15 Sep 2025 08:24:12 +0000 Linux model prone to burnout? https://lwn.net/Articles/1038082/ https://lwn.net/Articles/1038082/ taladar <div class="FormattedComment"> Thinking about that whole bus factor issue, it seems to me that the Linux model with the hierarchy of individuals responsible for the entire thing, a whole subsystem, a whole part of a subsystem,... 
is inherently prone to problems like burnout since everything relies on individuals instead of teams that can take the load off any given individual depending on their need to recover for a while.<br> </div> Mon, 15 Sep 2025 07:56:15 +0000 Such an important article, and not just for kernel development https://lwn.net/Articles/1038076/ https://lwn.net/Articles/1038076/ neilbrown <div class="FormattedComment"> Is it such an important article though?<br> Isn't it just someone saying "life is hard and this is how I deal with it"?<br> Certainly there is value in that but we will all have different experiences and different struggles and different values and so will often need different strategies.<br> I find this sort of talk contains a mixture of "not relevant to me" (burnout is not something I've experienced) and "isn't that obvious?" (Being nice to people has rewards! Who knew?).<br> That doesn't mean there is no value in the talk - it can be helpful to find that others have similar experiences and find similar solutions - but I don't see it the way you seem to.<br> And do you really think it will change anyone's behavior? I'm reminded of a lightbulb joke:<br> Q: how many social workers does it take to change a light bulb?<br> A: only one but the light bulb must WANT to change.<br> <p> </div> Mon, 15 Sep 2025 00:12:51 +0000 AI review? https://lwn.net/Articles/1038077/ https://lwn.net/Articles/1038077/ mathstuf <div class="FormattedComment"> What I think would be far more useful is an LLM-as-LSP setup where, instead of conforming to a chat interface, I can have an LLM "looking over my shoulder" and adding annotations directly to my editor as I'm working. If it then feeds suggested changes as LSP code actions, I don't have to pause any editing for the LLM to crunch its numbers so that its suggestions don't conflict. 
Even if the suggestions are just comments like "this codepath should be tested", I can later prompt it to try to create a skeleton test case for it.<br> <p> But I really don't know how to do this in a token-efficient manner when forced into a chat-shaped API. Do you send the file and some kind of patch sequences on a timer so you're not tossing the entire context at it for each edit? Pare it down to `ed` commands and send that as the patch sequence?<br> <p> At least to me, it sounds far more interesting than an LLM puking out hunks of unreviewed code at me (as someone who cares about long-term maintenance, at least).<br> </div> Mon, 15 Sep 2025 00:10:26 +0000 AI review? https://lwn.net/Articles/1038072/ https://lwn.net/Articles/1038072/ shemminger <div class="FormattedComment"> I wish AI patch review worked. It doesn't now.<br> You would think it would be a natural fit for digesting large-volume text sources like LKML and subsystem mailing lists. But in the few times I have tried the existing tools, the results were wildly disappointing. Full of vague stuff (more error checking is needed, but where?) and missing common failure patterns. My guess is that generic AI in its present state is unable to understand what is important. There are static checkers, but those are really just better versions of lint.<br> </div> Sun, 14 Sep 2025 22:22:06 +0000 Such an important article, and not just for kernel development https://lwn.net/Articles/1038068/ https://lwn.net/Articles/1038068/ sdalley <div class="FormattedComment"> I'm sometimes a bit surprised that great key articles like this one often attract so few comments.<br> <p> Maybe it's because, after reading it, all one can do is shake one's head and say, well, the man's right!<br> <p> Software guys are, much more than average, often compulsive hyperactive-obsessive-detail-oriented-perfectionist types. 
Great virtues so long as they don't start taking over the show.<br> <p> Hopefully this article saves some of us from becoming short-fuse tunnel-visioned hot-tempered jerks or burning out completely. Life is far more than programming.<br> <p> "Mens sana in corpore sano" as the Romans used to say. I've found it works in both directions.<br> </div> Sun, 14 Sep 2025 20:53:26 +0000