Consider the web in 1994
Posted Jul 24, 2024 7:47 UTC (Wed)
by b7j0c (guest, #27559)
In reply to: Consider the web in 1994 by LtWorf
Parent article: Imitation, not artificial, intelligence
AI will eventually become deeply contextual, so your progress and learning over time will be reflected in its responses.
Having an expert at your side at all times will change industries.
Posted Jul 24, 2024 7:58 UTC (Wed)
by rgb (subscriber, #57129)
[Link] (10 responses)
Posted Jul 24, 2024 8:00 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (9 responses)
My car is an explosion box. My house is a sand pile. Etc., etc.
Putting a label on something does not remove its utility.
Posted Jul 24, 2024 8:18 UTC (Wed)
by rgb (subscriber, #57129)
[Link] (4 responses)
Posted Jul 24, 2024 8:29 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (3 responses)
if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market
if a CEO clings to humans when automation is ready, the company could lose in the market
in a competitive landscape, society gets closer to a state it prefers through its choices
no different than any tech since the industrial revolution
Posted Jul 24, 2024 12:06 UTC (Wed)
by pizza (subscriber, #46)
[Link]
> if a CEO clings to humans when automation is ready, the company could lose in the market
More accurately: They will go with the cheapest option they think they can get away with.
> if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market
More accurately: All that matters is that the financials look good *now* and any negative consequences come "later".
Posted Jul 26, 2024 15:42 UTC (Fri)
by immibis (subscriber, #105511)
[Link] (1 responses)
Look at all the world's information being locked on Reddit, which has just closed its doors (to every consumer of that information except for Google, who pays a lot of money to not be excluded). Reddit will surely go bankrupt due to this... in some years. It's already been over a year since Reddit started down this track. Reddit has almost never been profitable, so you could argue it's been making wrong decisions for ~20 years and still not failed. Meanwhile Facebook is occupied by almost exclusively Russian bots, and still has a high stock valuation. The markets are 99.9% disconnected from reality.
Posted Jul 26, 2024 17:11 UTC (Fri)
by LtWorf (subscriber, #124958)
[Link]
Remember the deleted post that stated that a US air base was the most Reddit-addicted city?
Posted Jul 25, 2024 4:01 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (3 responses)
I think a reductionist view has value because an LLM is not an intelligence; it is a thoroughly lobotomized parody of speech and writing that only works because people are _very_ willing to extend a theory of mind to anything making vaguely human-sounding noises. Even ELIZA was taken to be intelligent by some people. LLMs are only a little more sophisticated, but with great gobs of training data an LLM can spit out something plausible-sounding given a wide variety of inputs.
An LLM can't train someone in programming because it doesn't know how to program, it doesn't know how to teach, and it has no way to model the state of the learner and adjust its pedagogy based on a feedback loop consulting the hundreds or thousands of different neural networks in your brain. An LLM hardly has any feedback loops of any kind, and barely one network.
An LLM can spit out text real good, but it doesn't have enough varied systems to have the intelligence of a fruit fly. All it has are the strings of symbols that people have used to communicate with each other and with the computer, and all it can do is combine those symbols in abstract and pleasing ways. People will invent a theory of mind when talking to it, but people are capable of anthropomorphizing far less animate objects than computers.
It may well be that some day an artificial mind is created; it's not like brains are made out of magic unobtainium, and they obey and rely on the physics of the same universe that allows computers to function. But modern "AI" has nowhere near the complexity and interlocking learning and feedback systems present in _any_ vertebrate, let alone human intelligence.
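(For anyone who hasn't seen how little machinery ELIZA needed: the whole trick was keyword matching plus pronoun reflection. A minimal illustrative sketch in Python - the rules below are invented stand-ins, not Weizenbaum's original DOCTOR script:

import random
import re

# Each rule: a keyword pattern and canned replies; {0} is filled with the
# user's own words, pronoun-swapped, which is what makes it sound attentive.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*", re.I),
     ["Please tell me more.", "I see. Go on."]),
]

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my"}

def reflect(text):
    # Swap first and second person so the echo reads as a reply.
    return " ".join(REFLECT.get(word, word) for word in text.lower().split())

def respond(line):
    for pattern, answers in RULES:
        match = pattern.match(line.strip())
        if match:
            captured = match.group(1) if match.groups() else ""
            return random.choice(answers).format(reflect(captured))
    return "I see."

print(respond("I feel nobody understands my code"))
# -> e.g. "Why do you feel nobody understands your code?"

No model of the world anywhere, and people still confided in it.)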
Posted Jul 25, 2024 16:33 UTC (Thu)
by rgb (subscriber, #57129)
[Link]
BTW here is a related piece I couldn't agree more with:
https://www.theatlantic.com/technology/archive/2024/07/op...
Posted Jul 25, 2024 19:47 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
The scary thing is there isn't the feedback in modern AI that is present in very basic lifeforms - even the humble fruit fly. And even the human brain probably has less raw processing power than a 6502! (I don't know how much we do have, but I know we achieve much more, with far fewer resources, than a computer.)
Cheers,
Wol
Posted Jul 26, 2024 15:43 UTC (Fri)
by immibis (subscriber, #105511)
[Link]
Posted Jul 24, 2024 13:47 UTC (Wed)
by paulj (subscriber, #341)
[Link] (5 responses)
https://web.archive.org/web/20240101002023/https://www.ny...
(Example given in the "ChatGPT is bullshit" paper, which someone else had linked to in the Zuck / Meta LLaMa article).
Posted Jul 24, 2024 14:32 UTC (Wed)
by Paf (subscriber, #91811)
[Link] (4 responses)
Most tutorials online are also a bit wrong or out of date. The LLM synthesizes from many of them and is, in my experience, generally stronger and more accurate than most individual articles.
It's easy to say things like "Bayesian parrot", but whatever label you attach, they are in practice *really good* at this stuff. That's from substantial personal experience.
Posted Jul 24, 2024 17:10 UTC (Wed)
by legoktm (subscriber, #111994)
[Link] (2 responses)
I'll echo this. LLMs are, to my surprise, quite good at generating code, and crucially, code is something we have extensive tooling and practices to verify the correctness of.
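To make that concrete, here is a minimal sketch of the kind of gate I have in mind - write_patch is a hypothetical stand-in for however you apply the model's suggestion; the existing test suite does the real work:

import subprocess
import sys

def run_test_suite():
    # Run the project's tests; the exit code is the verdict.
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode == 0

def accept_patch(write_patch):
    # Treat the model's output as an untrusted patch from a stranger.
    write_patch()  # hypothetical: applies the LLM-suggested change to the tree
    if run_test_suite():
        return True
    # Reject: revert the working tree so the broken state goes no further.
    subprocess.run(["git", "checkout", "--", "."])
    return False

The same review, CI, and test discipline we already apply to human patches applies unchanged.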
Posted Jul 25, 2024 4:08 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (1 responses)
Posted Jul 29, 2024 0:27 UTC (Mon)
by Paf (subscriber, #91811)
[Link]
I feel people are falling into the AI effect trap: "The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not 'real' intelligence." No, LLMs are not anywhere close to human intelligence; that doesn't stop them from being quite useful regardless.
Posted Jul 25, 2024 10:02 UTC (Thu)
by paulj (subscriber, #341)
[Link]
It's a big IF, though, because some people who are not experts will use it to produce reams of material that /looks plausible/. Worse, people who are experts but are a bit lazy (or are doing their "homework" a bit too late) may /fail/ to do the review, correct, and tweak step, and instead present the AI-generated bullshit as the work of their own expertise! (The article I linked to being a case in point - how much money did the defendant have to expend on legal fees to prove it was bullshit? Had they not had those resources, they might have had to fold and settle, or lost the case - and then been on the hook for the costs!)
So yes, useful. With caveats. And the problem is that some will be ignorant of, or just ignore, the caveats - like in the article!
