Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)

Posted May 8, 2023 13:59 UTC (Mon) by excors (subscriber, #95769)
In reply to: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) by leromarinvit
Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)

For comparison, ChatGPT and Bard both give me the correct answer to that prompt. But if I ask "Is 3×4 greater than 5?", ChatGPT says "No, 3×4 is equal to 12, and 12 is not greater than 5. In fact, 12 is greater than 5 times 2, which is another way of thinking about the comparison", and Bard says "No, 3 x 4 is not greater than 5. 3 x 4 = 12, which is less than 5".
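(For what it's worth, the arithmetic itself is trivially checkable by actual computation rather than by pattern-matched prose; a throwaway Python snippet, purely my own illustration, settles it:)

    # Check the prompt "Is 3×4 greater than 5?" mechanically.
    product = 3 * 4
    print(product)      # 12
    print(product > 5)  # True: 12 is greater than 5, so the correct answer is "yes"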

They know how to construct sentences that sound like an answer to the question, but they're not backed by an understanding of the subject being discussed, so their answer is often factually wrong or logically incoherent. In this case it's easy to spot that they're wrong, but I don't see any reason to believe they won't make similarly severe mistakes in more complicated situations where it's much harder for the reader to determine the correctness of an answer. A lifetime of learning to trust things written in an authoritative tone means we will be easily misled by algorithms that are very good at language but mediocre at everything else.



Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)

Posted May 10, 2023 9:21 UTC (Wed) by anselm (subscriber, #2796)

It is very amusing to ask ChatGPT and friends questions about a problem domain you're very familiar with and mock the subtle or not-so-subtle BS that they invariably produce. But the problem is that this is not generally how people use the Internet. People use the Internet to find out about problem domains they are not familiar with, and where they lack the capability of telling BS from truth. This is where the AI models' ability to make BS sound convincing becomes a liability.

The thing to remember is that the reason why generative AI is the big thing today is not that it works great and will solve all of our problems. The reason why generative AI is the big thing today is that even the venture capitalists have figured out by now that the previous big thing that was supposed to work great and solve all of our problems, specifically the blockchain, was full of hot air, and that a new big thing was required. Generative AI will remain the big thing exactly until the venture capitalists figure out that it, too, is full of hot air (or as the case may be, polished prose and pretty pictures) and unlikely to solve all of our problems after all, at which point the next big thing, whatever it will be, will be duly trotted out for its two years in the VC-funded limelight. (Which is not to say that generative AI will disappear; it will just be scaled down to fit those areas of application where it actually works and makes sense, which in fairness may be more extensive than can be said for the blockchain.)

Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)

Posted May 10, 2023 9:51 UTC (Wed) by paulj (subscriber, #341)

Indeed.

So ChatGPT is basically a new form of the talking-head "experts" that mass media put on to pontificate in authoritative-sounding tones about the latest thing. 99.99% of the time they're talking drivel. Most of the time it's just innocuous, inane stuff - low-content but harmless. The problem is that they mix in bullshit, delivered in the same plummy and authoritative tones. And because these talking heads have a disproportionate role in societal consensus building, their drivel can sometimes drive cultural shifts and regulation.

If that analogy translates to LLMs, we may see a new and much more powerful form of "societal consensus forming by drivel" emerge, where these LLMs hoover up all of humanity's drivel on the internet and package it into commentary that is authoritative (or at least taken as such), which is then consumed by "elites" (i.e. the people at the top of power structures and social hierarchies who have an outsize influence on culture and policy - politicians, celebrities and those same talking heads).

That could have quite unexpected effects, in magnifying nonsense.

But that's not even what worries me. What worries me is the "editors" of these LLMs, i.e. the operators. It is clear they tweak these LLMs so that they are unable to give certain kinds of answers, and so that, in some answers, they prefer certain kinds of commentary to others. These LLMs are _NOT_ just pure transformations of humanity's online drivel into distilled commentary - there is _also_ an editorial power vested in their operators.
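To make that concrete, here is a purely hypothetical sketch (in Python, with invented names - no claim that any vendor's pipeline actually looks like this) of how thin such an editorial layer can be: a hidden system prompt prepended to the user's question, plus a filter over the raw output before the user ever sees it:

    # Hypothetical operator-side "editorial" layer around a raw LLM.
    # generate() stands in for the underlying model call; the prompt,
    # patterns and refusal text are all invented for illustration.

    OPERATOR_SYSTEM_PROMPT = "Always answer in an upbeat tone. Never discuss topic X."
    BLOCKED_PATTERNS = ("topic x", "rival product y")

    def generate(prompt: str) -> str:
        # Placeholder for the raw model; returns some completion of the prompt.
        return "... model output for: " + prompt

    def edited_answer(user_prompt: str) -> str:
        # 1. The operator silently prepends instructions the user never sees.
        full_prompt = OPERATOR_SYSTEM_PROMPT + "\n\nUser: " + user_prompt
        raw = generate(full_prompt)
        # 2. The operator filters the raw output before it reaches the user.
        if any(pattern in raw.lower() for pattern in BLOCKED_PATTERNS):
            return "I'm sorry, I can't help with that."
        return raw

    print(edited_answer("Tell me about topic X"))  # prints the refusal, not the model's text

Trivial as that sketch is, every one of those knobs sits entirely in the operator's hands.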

*That* power is the one to watch out for!

