
Everything is not a nail...

Posted Oct 26, 2024 1:40 UTC (Sat) by chuckwolber (subscriber, #138181)
Parent article: OSI readies controversial Open AI definition

It was relatively easy to create a definition of openness that applies cleanly to deterministic systems like compiled software. It seems like we are trying to apply the same definition to a non-deterministic system (neural nets) and we keep failing to notice the real problem.

If I listen to music that ultimately influences my style as a musician, I am no less free when I play in my heavily influenced style and the music that influenced me is no less encumbered by copyright. The trained neural net (biological or digital) is its own qualia, which exists independent of the influences that trained it and it owes nothing to those influences.

Today those qualia are small, so it is easy to dismiss their subjectivity. Scale that qualia up closer to the complexity of a human mind and the problem should be more clear.

By way of analogy - demanding a full accounting of the training material to satisfy an openness requirement is like demanding that you provide a full accounting of everything you were exposed to since birth before we allow you to operate freely in open society. The very idea is absurd.

We invented a shortcut to that problem a long time ago - it is called a "social contract". We cannot possibly know everything that is going on in your head, but we can set forth expectations and apply relevant penalties.

I propose we rethink the OSAI in the same way.



Everything is not a nail...

Posted Oct 26, 2024 5:12 UTC (Sat) by shironeko (subscriber, #159952)

I hope you are not letting others modify and redistribute your brain. By all definitions, your mind is proprietary to you.

Everything is not a nail...

Posted Oct 26, 2024 13:11 UTC (Sat) by Wol (subscriber, #4433)

> Today those qualia are small, so it is easy to dismiss their subjectivity. Scale that qualia up closer to the complexity of a human mind and the problem should be more clear.

And herein lies the entire problem with AI, imho.

The qualia (whatever that means) of the human brain is much LESS than we give it credit for. But it is QUALITATIVELY DIFFERENT from what we call AI and LLMs and that junk. AI and LLMs are bayesian inference machines. Human brains have external forces imposing a (not necessarily correct) definition of "right" and "wrong".

When a human brain messes up, its bayesian machine is likely to get eaten by a lion, or gored by a wildebeest, or whatever. When a computer bayesian machine messes up, it's too expensive to retrain.

I've mentioned the crab before, but if a 6502-powered robot crab can cope easily with the complexities of the surf zone, why can't a million-pound AI tell the difference between a car and a tank ... (I suspect it's because the human brain, and maybe that crab?, had a whole bunch of specialist units which probably were much larger than the central bayesian machine - we're going down the wrong route ... again ...)

Cheers,
Wol

Everything is not a nail...

Posted Oct 26, 2024 17:31 UTC (Sat) by khim (subscriber, #9252)

> When a human brain messes up, its bayesian machine is likely to get eaten by a lion, or gored by a wildebeest, or whatever.

That's an extremely rare occurrence.

> When a computer bayesian machine messes up, it's too expensive to retrain.

That could be true for LLMs today, but many smaller-scale AI models are retrained from scratch routinely.

Pretty soon there will be more LLMs retrained from scratch than the 100 or 200 billion people who have ever lived on this planet.

Would that mean that LLMs will achieve parity with humans when that happens? Hard to predict… my gut feeling is that no, it wouldn't – but not because encounters with lions or wildebeests made humans that much better, but because nature invented a lot of tricks over billions of years that we don't know how to replicate… yet.

> I've mentioned the crab before, but if a 6502-powered robot crab can cope easily with the complexities of the surf zone, why can't a million-pound AI tell the difference between a car and a tank

It could. In fact, today AI already makes fewer mistakes than humans on such tasks.

Yes, AI makes different mistakes, but its average precision is better.

> I suspect it's because the human brain, and maybe that crab?, had a whole bunch of specialist units which probably were much larger than the central bayesian machine - we're going down the wrong route ... again ...

We just tend to dismiss our own issues with “optical illusions” and exaggerate AI mistakes. To feel better about ourselves, maybe?

When you are shown a picture where the exact same color looks “white” in one place and “black” in another, you don't even need AI to reveal the difference; a simple sheet of paper is enough – but that's not perceived as a human brain deficiency, because we are all built the same and all fall prey to the exact same illusion.

AIs are built differently and are fooled by different illusions than humans are – and that difference is, in our arrogance, treated as “human superiority”.

It's only when AI starts doing things so much better than humans, beats humans so supremely decisively that there is no doubt AI is “head and shoulders” above humans at this or that task… only then do humans… redefine that particular task as “not really important”. As happened with Go and chess, among other things.

NN training is deterministic (enough)

Posted Oct 27, 2024 16:10 UTC (Sun) by ghodgkins (subscriber, #157257)

> a non-deterministic system (neural nets)

In my understanding, training a neural net is deterministic in the sense that matters for reproducibility. If you train the same model architecture in the same environment with the same data, you'll get the same final weights. This is true even if you draw from random distributions during training, as long as you choose the same seed(s) for the PRNG.

The input-output mapping of the trained model is usually also deterministic, except for some special-purpose stochastic models. Even those you may be able to make reproducible by fixing the PRNG seed, as above.
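
For what it's worth, here is a minimal sketch of that claim (assuming PyTorch, a throwaway toy model, and CPU execution, where the kernels are deterministic): two independent training runs with the same seed and the same data end with bit-identical weights.

    import torch
    import torch.nn as nn

    def train(seed):
        torch.manual_seed(seed)                # fix all torch PRNG state up front
        model = nn.Linear(4, 1)                # same architecture every run
        data = torch.randn(64, 4)              # "training data" drawn after seeding
        target = data.sum(dim=1, keepdim=True)
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(data), target)
            loss.backward()
            opt.step()
        return model

    a, b = train(0), train(0)
    # Same seed + same data + same environment -> identical final weights.
    assert all(torch.equal(p, q) for p, q in zip(a.parameters(), b.parameters()))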

> The trained neural net (biological or digital) is its own qualia, which exists independent of the influences that trained it and it owes nothing to those influences.

It is not true that the weights "owe nothing" to the training data. As mentioned above, for a fixed PRNG seed, they are in fact a very complex closed-form function of that data - certainly "dependent" in the probability sense.

> By way of analogy - demanding a full accounting of the training material to satisfy an openness requirement is like demanding that you provide a full accounting of everything you were exposed to since birth before we allow you to operate freely in open society.

I think it's reasonable to have different expectations for software tools and the people that use them, and honestly kind of absurd not to.

> The very idea is absurd.

For humans, certainly. One key difference between humans and NNs here is that NNs have a thing called "training" with well-defined inputs and output, in a consistent and well-defined format, which makes enumerating the training data entirely feasible.
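
To make that concrete, a hypothetical sketch of what such an accounting could look like (the directory name is invented, and a real corpus would carry licensing metadata as well); it is nothing more than a walk over the files that went into training:

    import hashlib
    import json
    from pathlib import Path

    def manifest(data_dir):
        """List every training file with its size and SHA-256 digest."""
        entries = []
        for path in sorted(Path(data_dir).rglob("*")):
            if path.is_file():
                entries.append({
                    "file": str(path),
                    "bytes": path.stat().st_size,
                    "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
                })
        return entries

    print(json.dumps(manifest("training_data/"), indent=2))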

> We cannot possibly know everything that is going on in your head

But we can know everything that is going on inside a NN, although we may not be able to interpret it with respect to the inputs and outputs.
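
Again a small sketch, assuming PyTorch and a made-up toy model: every weight is directly readable, and forward hooks expose every intermediate activation; whether we can interpret those numbers is the hard part.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    for name, param in model.named_parameters():   # every weight and bias is visible
        print(name, tuple(param.shape))

    activations = {}
    def capture(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()     # every intermediate value is visible too
        return hook

    for name, module in model.named_modules():
        if name:                                    # skip the top-level container itself
            module.register_forward_hook(capture(name))

    model(torch.randn(1, 4))
    print({name: tuple(act.shape) for name, act in activations.items()})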

