Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)
Posted May 10, 2023 9:51 UTC (Wed) by paulj (subscriber, #341)In reply to: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) by anselm
Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)
So ChatGPT is basically a new form of the talking-head "experts" that mass media put on to pontificate in authoritative-sounding tones about the latest thing. 99.99% of the time they're talking drivel. Most of the time it's just innocuous, inane stuff - low-content but harmless. The problem is they mix in bullshit, delivered in the same plummy and authoritative tones. And since these talking heads play a disproportionate role in societal consensus building, their drivel can sometimes drive cultural shifts and regulation.
If that analogy translates to AI LLMs, we may see a new and much more powerful form of "societal consensus forming by drivel" emerge, where these LLMs hoover up all of humanity's drivel on the internet and package it into commentary that is authoritative (or taken as such), which is then consumed by "elites" (i.e. people at the top of power structures and social hierarchies who have an outsize influence on culture and policy - politicians, celebrities, and those same talking heads).
That could have quite unexpected effects, in magnifying nonsense.
But that's not even what worries me. What worries me is the "editors" of these LLMs - i.e., the operators. It is clear they tweak these LLMs so that they are unable to give certain kinds of answers, and so that in some answers they prefer certain kinds of commentary over others. These LLMs are _NOT_ just pure transformations of humanity's online drivel into distilled commentary - there is _also_ an editorial power vested in their operators.
*That* power is the one to watch out for!