
What about trust?

Posted Jun 27, 2025 10:49 UTC (Fri) by paulj (subscriber, #341)
In reply to: What about trust? by paulj
Parent article: Supporting kernel development with large language models

Oh, and at inference time you are combining that "trained" database with your own context window (a fraction the size of the data set), to have the LLM extend that context window and produce something that 'follows' from the two.
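
To make that concrete, here is a deliberately toy sketch in C (my own illustration, not how a transformer actually computes: a lookup table stands in for the trained weights). The point is the shape of the loop: fixed parameters plus a caller-supplied context window, extended one token at a time, so the output 'follows' from both.

/* Toy illustration only: a fixed "trained" table plus a caller-supplied
 * context window, extended one token at a time. Real LLMs use learned
 * weights and attention, not a lookup table; the principle of
 * "weights + context -> next token, repeated" is what this shows. */
#include <stdio.h>
#include <string.h>

struct bigram { const char *prev, *next; };

/* Stand-in for the frozen, "trained" parameters */
static const struct bigram weights[] = {
	{ "the",    "kernel" },
	{ "kernel", "is"     },
	{ "is",     "free"   },
};

static const char *predict(const char *prev)
{
	for (size_t i = 0; i < sizeof(weights) / sizeof(weights[0]); i++)
		if (strcmp(weights[i].prev, prev) == 0)
			return weights[i].next;
	return NULL;	/* nothing "follows" from this token */
}

int main(void)
{
	const char *context[8] = { "the" };	/* the user-supplied context window */
	size_t len = 1;

	/* Autoregressive loop: each new token is appended to the context
	 * and becomes part of the input for the next prediction. */
	while (len < sizeof(context) / sizeof(context[0])) {
		const char *next = predict(context[len - 1]);
		if (next == NULL)
			break;
		context[len++] = next;
	}

	for (size_t i = 0; i < len; i++)
		printf("%s%s", context[i], i + 1 < len ? " " : "\n");
	return 0;
}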



What about trust?

Posted Jun 27, 2025 11:34 UTC (Fri) by bluca (subscriber, #118303)

As I said, it's not copying from a database, which is what the post I answered to implied.

What about trust?

Posted Jun 30, 2025 22:40 UTC (Mon) by koh (subscriber, #101482)

There was no mention of a "database" in the post you replied to. You invented that notion. As with nearly all your comments, this one is wrong in a subtle but quite fundamental way. And yes, LLMs do copy, too; see e.g. Q_rsqrt. In the corresponding terminology it's called overfitting.
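
For those who don't have the reference handy: Q_rsqrt is the fast inverse square root from Quake III Arena's q_math.c (released under the GPLv2), magic constant and all; it is the snippet referred to above as being reproduced essentially verbatim by code models. The code below is the well-known routine; the comments are mine, not the originals.

/* Fast inverse square root from Quake III Arena (q_math.c, GPLv2).
 * Reproduced for reference; descriptive comments added. */
float Q_rsqrt(float number)
{
	long i;
	float x2, y;
	const float threehalfs = 1.5F;

	x2 = number * 0.5F;
	y  = number;
	i  = * ( long * ) &y;				/* reinterpret the float's bits as an integer */
	i  = 0x5f3759df - ( i >> 1 );			/* the famous magic constant */
	y  = * ( float * ) &i;				/* back to float */
	y  = y * ( threehalfs - ( x2 * y * y ) );	/* one Newton-Raphson refinement step */

	return y;
}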


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds