
What about trust?


Posted Jun 26, 2025 21:23 UTC (Thu) by cengizIO (subscriber, #106222)
Parent article: Supporting kernel development with large language models

‘Signed-off-by’ implies both authorship and responsibility. Yes, he claims responsibility, but what about authorship? Where did the AI model originally copy that code from? Copying an LLM generated response and tagging it as one’s own is certainly not the right direction for an ecosystem built solely on earned trust.



What about trust?

Posted Jun 26, 2025 21:39 UTC (Thu) by bluca (subscriber, #118303) (4 responses)

An LLM is not a database; it doesn't "copy" code from anywhere. How it works is explained in the article itself.

What about trust?

Posted Jun 27, 2025 10:46 UTC (Fri) by paulj (subscriber, #341) (3 responses)

An LLM is literally a database of fine-grained features found in a data-set: a quite sophisticated one that captures the relationships between features and their likelihood, over varying and overlapping spans of features.

The LLM weights are a function of and a transformation of the data that was fed in. No more, no less.

What about trust?

Posted Jun 27, 2025 10:49 UTC (Fri) by paulj (subscriber, #341) (2 responses)

Oh, and at inference time you are combining that "trained" database with your own context window (a fraction of the size of the data-set), to have the LLM extend that context window and produce something that 'follows' from the two.
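The "weights as a function of the training data, extended from a context window" idea can be sketched with a deliberately tiny toy, nothing like a real transformer: a bigram model whose entire "weights" are counts derived from the corpus, used at inference time to greedily extend a caller-supplied context. All names here are invented for illustration.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Build a next-token count table from the training data.

    The table is purely a function (and transformation) of the corpus
    fed in, mirroring the claim above about model weights."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, context, steps=3):
    """Extend the context with the most likely continuation, token by token."""
    out = context.split()
    for _ in range(steps):
        followers = counts.get(out[-1])
        if not followers:
            break  # nothing in the "weights" follows this token
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the cat", steps=2))
```

A real LLM replaces the count table with learned continuous weights and samples over much longer spans, but the shape of inference is the same: trained parameters plus a context window, producing a continuation.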

What about trust?

Posted Jun 27, 2025 11:34 UTC (Fri) by bluca (subscriber, #118303) (1 responses)

As I said, it's not copying from a database, which is what the post I replied to implied.

What about trust?

Posted Jun 30, 2025 22:40 UTC (Mon) by koh (subscriber, #101482)

There was no mention of a "database" in the post you replied to; you invented that notion. As with nearly all your comments, this one is wrong in a subtle but quite fundamental way. And yes, LLMs do copy, too; see e.g. Q_rsqrt. In the corresponding terminology, it's called overfitting.

What about trust?

Posted Jun 26, 2025 21:45 UTC (Thu) by laurent.pinchart (subscriber, #71290)

> ‘Signed-off-by’ implies both authorship and responsibility.

That is not quite accurate. Quoting Documentation/process/submitting-patches.rst:

> The sign-off is a simple line at the end of the explanation for the
> patch, which certifies that you wrote it or otherwise have the right to
> pass it on as an open-source patch. The rules are pretty simple: if you
> can certify the below:
>
> Developer's Certificate of Origin 1.1
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> By making a contribution to this project, I certify that:
>
> (a) The contribution was created in whole or in part by me and I
> have the right to submit it under the open source license
> indicated in the file; or
>
> (b) The contribution is based upon previous work that, to the best
> of my knowledge, is covered under an appropriate open source
> license and I have the right under that license to submit that
> work with modifications, whether created in whole or in part
> by me, under the same open source license (unless I am
> permitted to submit under a different license), as indicated
> in the file; or
>
> (c) The contribution was provided directly to me by some other
> person who certified (a), (b) or (c) and I have not modified
> it.
>
> (d) I understand and agree that this project and the contribution
> are public and that a record of the contribution (including all
> personal information I submit with it, including my sign-off) is
> maintained indefinitely and may be redistributed consistent with
> this project or the open source license(s) involved.

Whether or not a patch generated by an LLM qualifies for (a) or (b) is being debated, but authorship is not required to add a Signed-off-by line.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds