
An LLM is...

Posted Jun 26, 2025 22:41 UTC (Thu) by Wol (subscriber, #4433)
In reply to: An LLM is... by rsidd
Parent article: Supporting kernel development with large language models

> For writing code, again, verify (as the speaker here did) before using. But the concerns about both energy consumption and IP rights are very real. That said, one has to calculate the energy tradeoff of programmers taking minutes to write a small function, vs LLMs taking milliseconds to do the same thing. (OK, programmers will have to take seconds or a couple minutes to tell the LLM what they want. Not sure how the math works at scale.)

Well, as a confirmed AI sceptic, I've just had an interesting experience today.

One of my colleagues (a planner who doesn't really know programming) wrote an Excel tool to do a job, with a whole bunch of VBA. Cue the usual misunderstanding between the pro programmer who didn't understand what he was trying to do, and the end user who was rubbish at explaining what was required. Fortunately, after a quick water-cooler chat with a senior Planner, the lightbulb went on.

He'd used an AI to write the code, and it was eight pages of well-documented code, maybe 50% comments and 50% code. But it obviously didn't follow our style.

So I took what he'd done and redid it. My way, of course, and I probably ended up with 25% comments, 75% code. And only two pages!

So my reaction would be that a half-decent programmer should be able to outperform an AI pretty easily. BUT! The fact that an end user could easily write a working proof of concept was brilliant: I was given a working demo of what was required. And the AI used a couple of features I didn't know or understand, so it taught me something. (I also looked at a load of stuff it was doing and thought "why the **** are you doing THAT?!" :-)

Cheers,
Wol



An LLM is...

Posted Jun 27, 2025 9:17 UTC (Fri) by farnz (subscriber, #17727)

Also note that one of the great things about having a program that meets the user's needs (even if it's otherwise awful: insecure, unreadable, prone to crashing off the happy path, etc.) is that you can write a fuzz tester to compare the two programs and tell you about differences.

If the differences are acceptable (e.g. the user's program crashes where yours succeeds), you can ignore them; if they're not (the user's program outputs a different value from yours), you can turn the input into a test case, confirm with the user that it's reasonable (and not a bug in their program), and then fix that failing test in your program.

There are even people out there experimenting with using LLMs to fuzz the differences between a reimplementation and an original program.

But the key power here is having an unreliable oracle (the user's LLM-aided attempt at a program) that you can use to quickly answer questions about what the user "really" wants. That allows you to use your human intelligence to build up a reasonable set of questions to ask the user, using the oracle to answer the dumb questions.
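To make that concrete, here's a minimal sketch of such a differential harness in Python. Everything specific in it is made up for illustration: the two command lines, the one-shot stdin/stdout convention, and the crude input generator (a real harness would generate data shaped like whatever the user's tool actually consumes):

import random
import subprocess

# Hypothetical stand-ins: the user's LLM-aided program (the unreliable
# oracle) and the rewrite under test. Both are assumed to read stdin
# and write their result to stdout.
ORACLE = ["python", "user_tool.py"]
REWRITE = ["python", "our_tool.py"]

def run(cmd, data):
    # Run one program on the input; return (exit code, output).
    try:
        p = subprocess.run(cmd, input=data, capture_output=True,
                           text=True, timeout=10)
        return p.returncode, p.stdout
    except subprocess.TimeoutExpired:
        return -1, ""

for trial in range(1000):
    # Crude random input; shape this like the tool's real data.
    data = "".join(random.choice("0123456789,; \n")
                   for _ in range(random.randint(1, 80)))

    orc_rc, orc_out = run(ORACLE, data)
    new_rc, new_out = run(REWRITE, data)

    if orc_rc != 0 and new_rc == 0:
        continue  # acceptable difference: the oracle crashed, we didn't
    if orc_out != new_out:
        # Unacceptable difference: record it as a candidate test case
        # to confirm with the user before fixing the rewrite.
        with open("diff_%d.txt" % trial, "w") as f:
            f.write("input:\n%s\noracle:\n%s\nrewrite:\n%s\n"
                    % (data, orc_out, new_out))
        print("difference on trial %d saved" % trial)

The triage in the loop mirrors the rule above: a crash in the oracle where the rewrite succeeds is ignored, while any output mismatch is saved for the user to adjudicate before the rewrite is "fixed".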

An LLM is...

Posted Jun 29, 2025 13:01 UTC (Sun) by geert (subscriber, #98403)

So it looks a lot like comparing driver code in a vendor tree with driver code that has ended up in the Linux kernel?

I remember the (buggy) WiFi driver in my first Android tablet: it was (IIRC) three times as large as the driver that ended up upstream.

