An LLM is...
Posted Jun 26, 2025 15:38 UTC (Thu) by dskoll (subscriber, #1630)
Parent article: Supporting kernel development with large language models
An LLM [...] is really just a pattern-matching engine with a large number of parameters.
Yes, LLMs are pattern-matching engines that require unsustainably high amounts of energy to train, and whose enormous proliferation is likely to have severe environmental impacts, quite apart from the other societal effects.
Posted Jun 26, 2025 16:00 UTC (Thu) by rsidd (subscriber, #2582) [Link] (3 responses)
For doing academic research, I generally say: use LLMs like Wikipedia, i.e. verify the primary sources before using the answer.
For writing text, I generally say don't, though I can see that it can be a help for those who are not fluent in English.
For writing code, again, verify (as the speaker here did) before using. The concerns about both energy consumption and IP rights are very real. That said, one has to weigh the energy tradeoff of a programmer taking minutes to write a small function against an LLM taking milliseconds to do the same thing. (OK, the programmer will still take seconds to a couple of minutes to tell the LLM what they want; I'm not sure how the math works out at scale.)
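As a rough illustration of that tradeoff, here is a back-of-envelope sketch in Python. Every number in it is an assumption made up for illustration, not a measurement, and it ignores training energy entirely:

```python
# Back-of-envelope comparison: energy to produce one small function.
# Every constant below is an illustrative assumption, not a measurement.

HUMAN_POWER_W = 100          # assumed: workstation plus a share of office overhead
HUMAN_MINUTES = 15           # assumed: time for a person to write the function
LLM_ENERGY_PER_QUERY_WH = 3  # assumed: datacentre energy for one code-generation query
PROMPT_MINUTES = 2           # assumed: time spent describing the task to the LLM

human_wh = HUMAN_POWER_W * HUMAN_MINUTES / 60
llm_wh = LLM_ENERGY_PER_QUERY_WH + HUMAN_POWER_W * PROMPT_MINUTES / 60

print(f"human only: {human_wh:.1f} Wh, with LLM: {llm_wh:.1f} Wh")
# With these made-up numbers the LLM path looks cheaper, but the answer can
# flip once training energy is amortized in or the assumptions change.
```

Amortizing training energy over all queries is exactly the "at scale" part that is hard to pin down.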
Anyway, these are early days and in a couple years time maybe one can do many such tasks locally on one's laptop, and there may even be a saving of energy per unit of productivity. Plus AI may enable new eco-friendly energy resources and reduce our CO2 emissions (I genuinely think this is a real possibility). But I think the advice of "verify before using" will remain valid.
Posted Jun 26, 2025 22:41 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)
Well, as a big AI sceptic, I've just had an interesting experience today.
One of my colleagues (a planner who doesn't really know programming) wrote an Excel tool to do a job, with a whole bunch of VBA. Cue the usual misunderstanding between the pro programmer who didn't understand what he was trying to do, and the end user who was rubbish at explaining what was required. Fortunately, a quick water-cooler chat with a senior planner and the lightbulb went on.
He'd used an AI to write the code, and it was eight pages of well-documented code, so maybe 50% comments, 50% code. But it obviously didn't follow our style.
So I took what he'd done and redid it. My way, of course, and I probably ended up with 25% comments, 75% code. And only two pages!
So my reaction would be that a half-decent programmer should be able to outperform an AI pretty easily, BUT! The fact that an end user could easily write a working proof of concept was brilliant - I was given a working demo of what was required. And the AI used a couple of features I didn't know/understand, so it taught me something. (I also looked at a load of stuff it was doing and thought "why the **** are you doing THAT?!" :-)
Cheers,
Wol
Posted Jun 27, 2025 9:17 UTC (Fri) by farnz (subscriber, #17727) [Link]
Also note that one of the great things that comes out when you have a program that meets the user's needs (even if it's otherwise awful - insecure, unreadable, prone to crashing off the happy path, etc.) is that you can write a fuzz tester to compare the two programs and tell you about differences.
If the differences are acceptable (e.g. the user's program crashes where yours succeeds), you can ignore them; if they're not (the user's program outputs a different value from yours), you can turn it into a test case, confirm with the user that it's reasonable (and not a bug in their program), and then fix that failing test in your program.
There are even people out there experimenting with using LLMs to fuzz the differences between a reimplementation and an original program.
But the key power here is having an unreliable oracle (the user's LLM-aided attempt at a program) that you can use to quickly answer questions about what the user "really" wants. That allows you to use your human intelligence to build up a reasonable set of questions to ask the user, using the oracle to answer the dumb questions.
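A minimal sketch of that kind of differential check, assuming (purely for illustration) that both the user's LLM-aided program and the rewrite can be called as Python functions over a single integer input; the names and input domain here are hypothetical:

```python
import random

def differential_test(oracle, reimplementation, trials=10_000):
    """Compare the user's LLM-aided program (the unreliable oracle) against
    the rewrite on random inputs, collecting any divergence as a candidate
    test case to discuss with the user."""
    failures = []
    for _ in range(trials):
        x = random.randint(-1_000_000, 1_000_000)  # hypothetical input domain
        try:
            expected = oracle(x)
        except Exception:
            continue  # oracle crashes off the happy path: an acceptable difference
        actual = reimplementation(x)
        if actual != expected:
            failures.append((x, expected, actual))
    return failures

# Each entry in `failures` is a concrete question to put to the user:
# "for this input, your tool says X and mine says Y - which did you mean?"
```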
Posted Jun 29, 2025 13:01 UTC (Sun) by geert (subscriber, #98403) [Link]
I remember the (buggy) WiFi driver in my first Android tablet: IIRC, it was three times as large as the driver that ended up upstream.
An LLM doesn't *need* vast power.
Posted Jun 28, 2025 1:51 UTC (Sat) by gmatht (guest, #58961) [Link]
Posted Jul 3, 2025 1:55 UTC (Thu) by mbligh (subscriber, #7720) [Link] (1 responses)
Posted Jul 3, 2025 1:57 UTC (Thu) by dskoll (subscriber, #1630) [Link]
Humans can have other redeeming attributes not shared by LLMs. We also don't need to make a new type of human when we can barely figure out how to deal with the old type.