LWN: Comments on "Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)" https://lwn.net/Articles/930939/ This is a special feed containing comments posted to the individual LWN article titled "Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)". en-us Fri, 03 Oct 2025 12:17:00 +0000 Fri, 03 Oct 2025 12:17:00 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/932890/ https://lwn.net/Articles/932890/ flussence <div class="FormattedComment"> Assuming they aren't inventing an entire language that only happens to resemble English here (a big ask, I know): "tokens" should at least be one we're familiar with from strtok(3).<br> </div> Wed, 24 May 2023 19:38:40 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/932889/ https://lwn.net/Articles/932889/ flussence <div class="FormattedComment"> <span class="QuotedText">&gt; That doesn't make the tool pointless though. Nobody can make money selling web-browsers either; even though Opera sure tried. But that fact can't be used to justify a claim that a browser isn't an important and useful piece of software.</span><br> <p> Roads aren't a profitable business, but nobody in the real world is advocating for everyone to have a rollercoaster in their back yard.<br> </div> Wed, 24 May 2023 19:36:07 +0000 Human extinction from alignment problems https://lwn.net/Articles/932087/ https://lwn.net/Articles/932087/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; I think you're making a mistake in assuming management is intelligent ... managers typically have high EQ but low IQ - they are good at manipulating people, but poor at thinking through the consequences of their actions.</span><br> <p> I don't think that's fair, or accurate.<br> <p> It's probably much more accurate to state that, for most management in large-ish organizations, the incentives in place reward very short-term gains at the cost of long-term consequences. So management is rationally optimizing for what gives them the most benefit.<br> <p> <p> </div> Wed, 17 May 2023 12:10:16 +0000 Human extinction from alignment problems https://lwn.net/Articles/932073/ https://lwn.net/Articles/932073/ ras <div class="FormattedComment"> <span class="QuotedText">&gt; I think you're making a mistake in assuming management is intelligent</span><br> <p> Guilty as charged. But in the (smallish, successful, run by the person who founded them) companies I've been associated with, the top level people have always been smart. Not perhaps as good as a top engineer at abstract reasoning, but they definitely much better than most at thinking through the consequences of actions and planing accordingly.<br> <p> Your characterisation does seem accurate for the middle management in large organisations. When the smallish organisation I worked for got taken over by $4B company, I got to experience what it was like to for for middle "IT" management. After 2 years I could not stand it any more, and resigned.<br> <p> <span class="QuotedText">&gt; they are good at manipulating people</span><br> <p> Yes, but AI's can be too - as demonstrated by AI's being besting humans at playing diplomacy. In fact that seems to imply an AI can be better at manipulating people than people are. So future AI's could have both higher EQ's and IQ's than most humans, and have the extraordinary general knowledge ChatGPT displays. 
But to be useful they would have to be trained continuously as new conditions arise. The CPU and power requirements would be enormous - so big you could only justify it for something like a CEO of a large company.<br> <p> Compared to a human CEO, an AI CEO would know every aspect of the company's operations and everybody's contribution - even in a company with tens of thousands of employees. (A thing that amazes me about ChatGPT is its breadth of knowledge. Ask it about some question that could only be covered on a few obscure pages on the internet - and it often knows about it. I find it amazing that a lot of the information on the internet can be condensed into a "mere" trillion 16-bit floats.) I'm guessing such breadth of knowledge about the company would give it an enormous advantage over a human. Why would it need middle management, for a start?<br> <p> Where merely copying an AI works, you can share the training expense over a lot of instances. That's what may happen for taxi drivers and other "cookie cutter" jobs. I'm not sure programming and other engineering jobs fall into the same class. Good programmers have a lot of domain knowledge about the thing they are writing software for, which means the cookie cutter approach doesn't work so well.<br> </div> Tue, 16 May 2023 22:47:17 +0000 Human extinction from alignment problems https://lwn.net/Articles/932037/ https://lwn.net/Articles/932037/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; So the ideal task is something that generates large rewards for intelligence. That doesn't sound like a taxi driver, programmer or writer. The thing that seems to fit the bill best is ... replacing upper management.</span><br> <p> I think you're making a mistake in assuming management is intelligent ... managers typically have high EQ but low IQ - they are good at manipulating people, but poor at thinking through the consequences of their actions.<br> <p> Mind you, given that studies show that paying over-the-top bucks to attract talent is pouring money down the drain (your typical new CEO - no matter their pay grade - typically underperforms for about 5 years), an AI might actually be a good replacement.<br> <p> Cheers,<br> Wol<br> </div> Tue, 16 May 2023 13:20:44 +0000 Human extinction from alignment problems https://lwn.net/Articles/931954/ https://lwn.net/Articles/931954/ ras <div class="FormattedComment"> <span class="QuotedText">&gt; Long before AI is good enough to independently take over the world, it might be good enough that management can fire most of the programmers, writers, artists, and middle management,</span><br> <p> I'm not sure about that. I don't think there is much doubt that in time AI will be able to do any "thinking" job better than a human, given they already do a lot of things better than humans now. Their one downside is the enormous cost of training and running. So the ideal task is something that generates large rewards for intelligence. That doesn't sound like a taxi driver, programmer or writer. The thing that seems to fit the bill best is ... replacing upper management.<br> <p> So my prediction for where we end up is - we all work for AIs whose loss function (the thing they are trying to optimise) is to maximise profit.<br> </div> Tue, 16 May 2023 00:06:45 +0000 Much ado about *censored* https://lwn.net/Articles/931563/ https://lwn.net/Articles/931563/ paulj <div class="FormattedComment"> I think it's a bit more nuanced than that.
It isn't just customers they don't want to offend; they may also wish to curry favour with powerful interests. E.g., the state: tech companies wish to stay in the good books of the state so as to obtain regulations that are in their self-interest and avoid ones that are not. So tech companies may well censor material that goes against the government line - be that political speech or even objective scientific factual material. This may happen at scales where it has a significant distorting effect on public discourse, and so affects important public policy (either by changing what is implemented, or by delaying required change).<br> <p> tl;dr: Tech companies have a demonstrated tendency to apply editorial control over both their own output and what they allow users to publish, to curry favour with the state - and other powerful interests.<br> <p> This is not at all done in the interests of customers. And (whatever they say) it is not in the interest of a healthy civil society. <br> </div> Thu, 11 May 2023 09:12:08 +0000 Human extinction from alignment problems https://lwn.net/Articles/931542/ https://lwn.net/Articles/931542/ JoeBuck <div class="FormattedComment"> I don't think that this is the near-term threat. Long before AI is good enough to independently take over the world, it might be good enough that management can fire most of the programmers, writers, artists, and middle management, have AI replace their functions, have a skeleton crew to clean up any problems in what the AI generates, and let the stockholders keep all the money.<br> </div> Wed, 10 May 2023 22:14:32 +0000 Much ado about *censored* https://lwn.net/Articles/931535/ https://lwn.net/Articles/931535/ JoeBuck <div class="FormattedComment"> Companies that try to prevent their models from spewing offensive garbage aren't doing it because they care about "wokeness" more than profit. Their motivation is precisely that they care about profit. Offending most of your customers isn't profitable.<br> <p> </div> Wed, 10 May 2023 21:05:13 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931494/ https://lwn.net/Articles/931494/ Baughn <div class="FormattedComment"> ChatGPT (-4 or otherwise) has been trained to be relentlessly optimistic and nice, so unless you explicitly ask it to critique you, it isn't going to do it. In short: It's a bullshitting yes-man. If you're aware of that, you can still get good value from it.<br> </div> Wed, 10 May 2023 14:41:20 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931448/ https://lwn.net/Articles/931448/ paulj <div class="FormattedComment"> Indeed.<br> <p> So ChatGPT is basically a new form of the talking-head "experts" that mass media put on to pontificate in authoritative-sounding tones about the latest thing. 99.99% of the time they're talking drivel. Most of the time, just innocuous inane stuff - low-content but harmless. The problem is they mix in bullshit, delivered in the same plummy and authoritative tones. And so, as these talking heads have a disproportionate role in societal consensus building, their drivel can sometimes drive cultural shifts and regulations.<br> <p> If that analogy translates to AI LLMs, we may see a new and much more powerful form of "societal consensus forming by drivel" emerge, where these LLMs hoover up all of humanity's drivel on the internet and package it into potentially authoritative (or taken-as-authoritative) commentary that is then consumed by "elites" (i.e.
people at the top of power structures and social hierarchies, who have an outsized influence on culture and policy - politicians, celebrities and those same talking heads).<br> <p> That could have quite unexpected effects, in magnifying nonsense.<br> <p> But that's not even what worries me. What worries me is the "editors" of these LLMs. I.e., the operators. It is clear they tweak these LLMs so that they are unable to give certain kinds of answers, and so that they prefer certain kinds of commentary in some answers to other kinds. These LLMs are _NOT_ just pure transformations of humanity's online drivel into distilled commentary - there is _also_ an editorial power vested in their operators.<br> <p> *That* power is the one to watch out for!<br> </div> Wed, 10 May 2023 09:51:59 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931445/ https://lwn.net/Articles/931445/ anselm <p> It is very amusing to ask ChatGPT and friends questions about a problem domain you're very familiar with and mock the subtle or not-so-subtle BS that they invariably produce. But the problem is that this is not generally how people use the Internet. People use the Internet to find out about problem domains they are <em>not</em> familiar with, and where they lack the capability of telling BS from truth. This is where the AI models' ability to make BS sound convincing becomes a liability. </p> <p> The thing to remember is that the reason why generative AI is the big thing today is not that it works great and will solve all of our problems. The reason why generative AI is the big thing today is that even the venture capitalists have figured out by now that the previous big thing that was supposed to work great and solve all of our problems, specifically the blockchain, was full of hot air, and that a new big thing was required. Generative AI will remain the big thing exactly until the venture capitalists figure out that it, too, is full of hot air (or as the case may be, polished prose and pretty pictures) and unlikely to solve all of our problems after all, at which point the next big thing, whatever it will be, will be duly trotted out for <em>its</em> two years in the VC-funded limelight. (Which is not to say that generative AI will disappear; it will just be scaled down to fit those areas of application where it actually works and makes sense, which in fairness may be more extensive than can be said for the blockchain.) </p> Wed, 10 May 2023 09:21:05 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931444/ https://lwn.net/Articles/931444/ anselm <blockquote><em>these things have an understanding of language(s) syntax and grammar - including programming languages</em></blockquote> <p> If you call being able to figure out, given a set of words, which word is most likely coming next – according to the training data they've seen – “an understanding”, then yes. “These things” essentially provide autocompletion on steroids. </p> Wed, 10 May 2023 09:02:51 +0000 Much ado about *censored* https://lwn.net/Articles/931427/ https://lwn.net/Articles/931427/ Qemist <div class="FormattedComment"> <span class="QuotedText">&gt; Now I have an AI on my 5800X + 2080Ti, /absolutely unfiltered/, citing and referencing *the* most offensive pieces of taboo research whenever I ask it to.</span><br> <p> Sounds great! Where's the best place to learn about how to do this?<br> <p> <span class="QuotedText">&gt; .
It's stuff that could never, ever be censored now, all available on torrents to deploy from, and eager to start spitting tokens, once it's fully mmap()ped. LOL.</span><br> <p> Careful. Kamala Harris is coming for you!<br> <p> <a href="https://www.bizpacreview.com/2023/05/05/kamala-harris-tabbed-as-artificial-intelligence-czar-and-the-jokes-write-themselves-1356342/">https://www.bizpacreview.com/2023/05/05/kamala-harris-tab...</a><br> </div> Tue, 09 May 2023 23:49:51 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931415/ https://lwn.net/Articles/931415/ eduperez <div class="FormattedComment"> Even if the answers are technically wrong, they are terrifyingly human-like.<br> </div> Tue, 09 May 2023 17:40:04 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931308/ https://lwn.net/Articles/931308/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; Further, human languages (and likely some others) are loosely defined, and "correct" idiom is often simply "what do most people say?"</span><br> <p> Which languages are those? Are you talking about English? Or English? Or English? What about German or German?<br> <p> And then you throw jargon into the mix, where "correct" idiom is a matter of context, and using the wrong idiom will seriously stuff you over. Like using street idiom in a law court. Or using the wrong variant of English. Or using a language typically described as English but which is incomprehensible even to the English themselves ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 09 May 2023 11:54:41 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931305/ https://lwn.net/Articles/931305/ paulj <div class="FormattedComment"> And it's worth pointing out that we have long had a formal understanding of syntax and grammar, and an ability to programme machines with well-specified languages and grammars. <br> <p> Further, human languages (and likely some others) are loosely defined, and "correct" idiom is often simply "what do most people say?" - something that statistical models, including analysis of how words and word sequences cluster in very high-dimensional spaces (see the other sibling comment), can be applied to answering. ;)<br> </div> Tue, 09 May 2023 09:43:55 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931301/ https://lwn.net/Articles/931301/ oldtomas <div class="FormattedComment"> Somehow I managed to miss the elephant in the room: besides politics and art there's Big Ad, arguably the fuel feeding this dumpster fire.<br> </div> Tue, 09 May 2023 05:32:20 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931298/ https://lwn.net/Articles/931298/ oldtomas <div class="FormattedComment"> In short: it is making up bullshit.<br> <p> Humans do that too, but we usually consider that a liability. Except, perhaps, in politics and sometimes in art (in the second case there are other subtle constraints at some meta level).<br> </div> Tue, 09 May 2023 04:44:02 +0000 Much ado about *censored* https://lwn.net/Articles/931285/ https://lwn.net/Articles/931285/ mathstuf <div class="FormattedComment"> While I'm sure things would be "fine" in the long run, WhatsApp going away would leave a *lot* of people without ways of communicating with family overseas, since it is a nice way to get free worldwide video and voice calls.
The rest would cause some disruption, but nothing on the level of WhatsApp going away. (I've no love lost for the thing, but I also cannot deny its usefulness).<br> </div> Mon, 08 May 2023 19:04:36 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931283/ https://lwn.net/Articles/931283/ excors <div class="FormattedComment"> That also highlights many limitations. ChatGPT says "Using braces [...] allows for more flexibility in terms of class and id attributes" (after a prompt showing the class and id attributes), which sounds like the kind of thing someone would say about a language, except it's wrong in this case - the proposed syntax is *less* flexible because you can't use '.' or '{' in an id. It repeatedly comments on the use of indentation, even though that's identical between the HTML and HBML and is not a necessary part of either syntax. It repeatedly praises the new syntax's readability, without considering what happens when you need lots of &lt;i&gt;inline markup&lt;/i&gt; (which I think will be much uglier than HTML), and without considering the common case of text containing double quotes. It doesn't notice that the user proposes an implicit &lt;html lang="en"&gt; when converting to HTML, which will cause problems for non-English documents.<br> <p> Its "more complicated example" repeats the user's typo of "uft-8", and it doesn't try to demonstrate that the 'div' is optional even though that was one of the main features discussed earlier. It says the example includes "inline styles" (which means style="" attributes), which is wrong.<br> <p> It's true that it's creating novel output - it's not simply copying-and-pasting stuff from the web, it's merging the user's input with some existing knowledge of simple HTML documents (in particular it looks very much like it has adapted the example from <a href="https://www.karrabi.com/samplepage/">https://www.karrabi.com/samplepage/</a>) to create something new and relevant, which is still an impressive capability. But most of what it's doing is echoing and rephrasing ideas from the user's prompts, adding vague and often incorrect observations of its own, and failing to provide any real insight. It's pretending to have much greater understanding than it really does.<br> </div> Mon, 08 May 2023 18:58:34 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931279/ https://lwn.net/Articles/931279/ eduperez <div class="FormattedComment"> Please, have a look at this interaction, and see for yourself:<br> <a rel="nofollow" href="https://www.reddit.com/r/ProgrammerHumor/comments/zd8ljb/i_taught_the_chat_bot_an_alternative_syntax_for/">https://www.reddit.com/r/ProgrammerHumor/comments/zd8ljb/...</a><br> <p> </div> Mon, 08 May 2023 16:59:24 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931261/ https://lwn.net/Articles/931261/ paulj <div class="FormattedComment"> Excellent observation.<br> <p> Reasoning by analogy is something I have long tried to point out as being a logical fallacy (e.g. "argumentum ad vehiculum"). I don't consider it reasoning really. ;)<br> <p> The way I look at it, these things have an understanding of language(s) syntax and grammar - including programming languages - but they have 0 understanding of the problem domains people are asking questions of. They appear, given the examples I've read, to produce (grammatically correct) mish-mashes of text related to the problem domain, more or less.
Sometimes right, depending on the input, the problem domain, and the intrinsic chance of a mish-mash being right.<br> </div> Mon, 08 May 2023 14:23:48 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931267/ https://lwn.net/Articles/931267/ apoelstra <div class="FormattedComment"> <span class="QuotedText">&gt;Does "it" have this ability to do things like... give you shell code for git tasks - or did it just give you the answer that someone else figured out and put on the Internet, which was then fed into its matrices, for it to regurgitate to you, and without credit to the original author?</span><br> <p> Yes, it can give you shell code that does not exist elsewhere on the Internet.<br> <p> <span class="QuotedText">&gt;(My understanding is LLMs do not have the ability to reason).</span><br> <p> In 2013 all the rage in ML was word2vec [1], which was much simpler than LLMs are today and maybe can illustrate how you could get "reasoning" out of something that is ultimately just shunting symbols and probabilities around. The idea was that you could take a word, embed it into a high-dimensional vector space (using some sort of neural-net classification scheme or whatever to define the embedding), and then you could do linear algebra on these things and get meaningful results out even though the specific embeddings are just opaque collections of numbers.<br> <p> For example, you could take "king" minus "queen" and get some sort of object that somehow represented the difference between a king and queen. If you then added "princess" to that object, you'd get "prince" (or something very close to it). If you added "girl" you'd get "boy". Et cetera.<br> <p> When this came out it was quite jarring and surprising that computers could do this sort of "analogous" reasoning on "human concepts". But it was also simple enough that you could see exactly what was going on.<br> <p> To extend this to something resembling an LLM, you can imagine doing multiple "layers" of such embeddings, where each layer is like a "higher layer of abstraction", and if you wave your hands and squint enough, and consider that there are literally trillions of datapoints going into training these things, it's not too shocking that you'd get a machine that "knows how to use sed" in a useful sense.<br> <p> BTW vector embeddings are still used in LLMs. You can find Python code to play with this sort of stuff here [2], though if you want to do it locally there's a bit of setup involved.<br> <p> [1] <a href="https://en.wikipedia.org/wiki/Word2vec">https://en.wikipedia.org/wiki/Word2vec</a><br> [2] <a href="https://python.langchain.com/en/latest/modules/models/text_embedding.html">https://python.langchain.com/en/latest/modules/models/tex...</a><br> <p> </div> Mon, 08 May 2023 14:18:23 +0000
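A minimal sketch of the analogy arithmetic described in that comment - assuming the gensim package and a one-off download of its pretrained "glove-wiki-gigaword-50" GloVe vectors (both are assumptions chosen for illustration, not anything the commenter used); exactly which word comes out on top varies with the embedding:

<pre>
# Hypothetical sketch: word2vec-style analogy arithmetic with
# pretrained GloVe vectors fetched through gensim's downloader.
# Assumes: pip install gensim, plus network access for the download.
import gensim.downloader as api

# Each word in the vocabulary maps to a 50-dimensional vector.
vectors = api.load("glove-wiki-gigaword-50")

# "king" - "queen" captures something like a male/female offset;
# adding it to "princess" should land near "prince".
print(vectors.most_similar(positive=["princess", "king"],
                           negative=["queen"], topn=3))

# The classic word2vec example: "king" - "man" + "woman" ~= "queen".
print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=3))
</pre>

most_similar() simply returns the nearest neighbours of the combined vector (excluding the input words), so how reliably "prince" ranks first is an empirical question; larger embeddings tend to do better at these analogies.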
Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931260/ https://lwn.net/Articles/931260/ excors <div class="FormattedComment"> For comparison, ChatGPT and Bard both give me the correct answer to that prompt. But if I ask "Is 3×4 greater than 5?", ChatGPT says "No, 3×4 is equal to 12, and 12 is not greater than 5. In fact, 12 is greater than 5 times 2, which is another way of thinking about the comparison", and Bard says "No, 3 x 4 is not greater than 5. 3 x 4 = 12, which is less than 5".<br> <p> They know how to construct sentences that sound like an answer to the question, but they're not backed by an understanding of the subject being discussed, so their answer is often factually wrong or logically incoherent. In this case it's easy to spot that they're wrong, but I don't see any reason to believe they won't make similarly severe mistakes in more complicated situations where it's much harder for the reader to determine the correctness of an answer. A lifetime of learning to trust things written in an authoritative tone means we will be easily misled by algorithms that are very good at language but mediocre at everything else.<br> </div> Mon, 08 May 2023 13:59:21 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931213/ https://lwn.net/Articles/931213/ leromarinvit <div class="FormattedComment"> I asked a few of the models mentioned in the article (via <a href="https://chat.lmsys.org/">https://chat.lmsys.org/</a>):<br> <p> <span class="QuotedText">&gt; 3×4×5 =</span><br> koala-13b: The result of 3 times 4 times 5 is 3,000.<br> oasst-pythia-12b: Your question seems malformed, however the answer remains the same. 4×5 is equal to 20. (It politely added "I hope this is helpful" when asked once more.)<br> fastchat-t5-3b: The calculation 345 is equal to 9 because 3 and 4 are both multiples of 5. (2nd try: The sum of 345 is 9.)<br> <p> The others gave numeric results - some correct, some not. Interestingly, I got the same two incorrect results from two models each (120 and 9). I guess the common heritage shows.<br> </div> Mon, 08 May 2023 13:11:35 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931210/ https://lwn.net/Articles/931210/ ballombe <div class="FormattedComment"> They reason analogically, but not analytically.<br> A bit like the human brain - that's why it is so hard to teach mathematics.<br> But the problem is that, compared to a human being, their analogical reasoning ability is completely disproportionate to their analytical reasoning ability. This creates false expectations.<br> <p> They have no notion of absolute truth - no notion that whether "3×4×5 = 15" is true or not does not depend on the history of interaction with the user.
This makes them a dangerous tool.<br> </div> Mon, 08 May 2023 13:00:03 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931211/ https://lwn.net/Articles/931211/ paulj <div class="FormattedComment"> Reasoning, or just cutting things up and throwing things together again in different ways that appear to fit with the general rules of the underlying language(s)?<br> </div> Mon, 08 May 2023 12:33:39 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931209/ https://lwn.net/Articles/931209/ roc <div class="FormattedComment"> They can definitely do some reasoning.<br> <p> Yes, they'll learn from the text they've seen online, but it's easy enough to get LLMs to correctly answer questions which are different from any in the training text.<br> </div> Mon, 08 May 2023 12:18:24 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931208/ https://lwn.net/Articles/931208/ adobriyan <div class="FormattedComment"> <span class="QuotedText">&gt; LLM</span><br> <p> Large Lying Model.<br> </div> Mon, 08 May 2023 11:38:42 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931207/ https://lwn.net/Articles/931207/ paulj <div class="FormattedComment"> Does "it" have this ability to do things like... give you shell code for git tasks - or did it just give you the answer that someone else figured out and put on the Internet, which was then fed into its matrices, for it to regurgitate to you, and without credit to the original author?<br> <p> (My understanding is LLMs do not have the ability to reason).<br> </div> Mon, 08 May 2023 10:06:29 +0000 Human extinction from alignment problems https://lwn.net/Articles/931206/ https://lwn.net/Articles/931206/ ssokolow <p>Robert Miles has a good video named <a rel="nofollow" href="https://www.youtube.com/watch?v=zkbPdEHEyEI">We Were Right! Real Inner Misalignment</a> which really drives home the problem of misalignment, not in terms of politics, but in terms of how difficult it is to be sure that these systems have <i>actually</i> learned what you tried to train into them. <p>...and it's got this great comment: <blockquote>Turns out the Terminator wasn’t programmed to kill Sarah Connor after all, it just wanted clothes, boots and a motorcycle.<br> <cite>-- <a rel="nofollow" href="https://www.youtube.com/watch?v=zkbPdEHEyEI&lc=UgwPZFa7r2VU2tzuIvF4AaABAg">Luke Lucos</a></cite></blockquote> Mon, 08 May 2023 09:34:39 +0000 Much ado about *censored* https://lwn.net/Articles/931205/ https://lwn.net/Articles/931205/ ssokolow <blockquote>Ever since about the 1980s for some reason governments have refrained from any meaningful regulation especially on anything IT and especially proactively.</blockquote> Probably some mix of... <ol> <li>The 1976 Buckley v. Valeo and 1978 First National Bank of Boston v. Bellotti cases effectively legalized bribing politicians in the U.S.</li> <li>The U.S. has a lot of influence</li> <li>Politicians lean toward being out-of-touch dinosaurs on technical subjects, giving lobbyists in any country more room to skew their perception without them realizing it.</li> </ol> Mon, 08 May 2023 09:23:40 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931198/ https://lwn.net/Articles/931198/ oldtomas <div class="FormattedComment"> My take on that is: the Big Ones (Google, Facebook [1]) were doing LLMs for quite a while.
(Possibly) for risk-aversion reasons they didn't dare to put it "out there".<br> <p> OpenAI, the hungry new entrant, fuelled by risk capital, didn't have that much to lose and just started the ChatGPT fireworks to dazzle the peasantry (and attract more risk capital, which obviously worked).<br> <p> But they are extracting another value out of it: all these engagements, all this trying to bypass safeguards (ChatGPT's evil twin, yadda, yadda) are extremely valuable feedback training data that no paid team within OpenAI could have come up with. And for free!<br> <p> This is "human computation", as Luis von Ahn [2] described it while working at... Google (remember Pokémon Go?).<br> <p> Someone at Meta, pissed off by them being undercut in that way by OpenAI (or someone else higher up at Meta, scared by Microsoft outpacing them), just orchestrated that leak to shake up the playing field.<br> <p> I'd ask ChatGPT whether that is what happened -- but there is too much Microsoft over it and I get skin rashes.<br> <p> [1] Or Alphabet and Meta, or however they are called this week.<br> [2] <a rel="nofollow" href="https://en.wikipedia.org/wiki/Luis_von_Ahn">https://en.wikipedia.org/wiki/Luis_von_Ahn</a><br> </div> Mon, 08 May 2023 06:01:18 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931177/ https://lwn.net/Articles/931177/ ballombe <div class="FormattedComment"> Exactly. I tried elementary mathematics questions and they gave plausible-sounding but completely incorrect answers.<br> <p> (for example, I managed to get one to say that 3×4×5 = 15)<br> <p> I cannot wait for students to complain that their wrong answer should be counted as correct since the LLM of the day gives the same result.<br> </div> Sun, 07 May 2023 21:14:56 +0000 Much ado about *censored* https://lwn.net/Articles/931178/ https://lwn.net/Articles/931178/ roc <div class="FormattedComment"> There's a lot of stuff in between like "help me hack all the world's computers", "help me make poison" [1], or "help me convince lots of people to give away their money and commit suicide" [2]. Major American corporations judge those topics controversial and want to avoid giving advice on those topics or facilitating them via an AutoGPT-style agentic loop, and these are also potential building blocks for a world-dominating AI. Believe me, it's awkward to be in the position of having to decide where to put guardrails, but "there should be no guardrails" does not sound good to me. Government regulation might be better than corporate self-governance, but a lot of people who fear corporate guardrails aren't much happier about governments.<br> <p> [1] <a href="https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx">https://www.theverge.com/2022/3/17/22983197/ai-new-possib...</a><br> [2] <a href="https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says">https://www.vice.com/en/article/pkadgm/man-dies-by-suicid...</a><br> </div> Sun, 07 May 2023 21:03:25 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931166/ https://lwn.net/Articles/931166/ ekj <div class="FormattedComment"> You're joking, right? Or deliberately trolling?<br> <p> It's true of course that this is all very new software, so it remains to be seen both how much better it can become as it matures, and exactly how much of a change it will mean for which parts of society.
But to describe it merely as text compression seems silly to me. It might end up changing the entire world, or it might end up being a somewhat more limited tool useful for a smaller set of scenarios. But it's pretty clear that it'll have some impact.<br> <p> Whether or not Google can make money on it is a different question; the author of the leaked memo is right that people's willingness to pay for access to a given tool is very limited in a situation where similarly good tools are available as free Open Source.<br> <p> That doesn't make the tool pointless though. Nobody can make money selling web browsers either, even though Opera sure tried. But that fact can't be used to justify a claim that a browser isn't an important and useful piece of software.<br> </div> Sun, 07 May 2023 13:31:11 +0000 Much ado about *censored* https://lwn.net/Articles/931162/ https://lwn.net/Articles/931162/ ekj <div class="FormattedComment"> I don't think it's exactly the same as that. We have millennia worth of experience with children, and we know that their capabilities as adults are going to be roughly comparable to the previous generation's, and that this stuff moves at a glacial pace; a single generation is 2-3 decades long.<br> <p> AI, in contrast, is rapidly improving, and it's plausible that near-future AI will have the capability of improving itself. Even if it does not, we know that computing hardware grows in power very rapidly, so that what's possible a decade from now is several orders of magnitude more computationally intense than what's possible today.<br> <p> Which means it's possible (though I'd not say "likely") that AI will turn the entire world on its head within a decade. The same thing isn't true for children; instead, we can be reasonably sure that the kids today will grow up to be adults that are not *that* vastly different from the previous generation.<br> </div> Sun, 07 May 2023 12:41:09 +0000 Much ado about *censored* https://lwn.net/Articles/931160/ https://lwn.net/Articles/931160/ ekj <div class="FormattedComment"> There are very different ways of "holding back" AIs though. There's trying to reduce the risk of a runaway process of self-improvement that'd potentially very quickly lead to AI being the new master of earth. And then there's ChatGPT giving me boilerplate nonsense and refusing to produce text on a given topic if a major American corporation judges the topic controversial and therefore potentially harmful to shareholders.<br> <p> Good Open Source alternatives are a good way of putting limits on the latter: if Google or other big companies shackle their AIs in ways users dislike, and the best Open Source alternatives are reasonably close in quality, the users will just defect.<br> <p> </div> Sun, 07 May 2023 12:11:44 +0000 Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis) https://lwn.net/Articles/931157/ https://lwn.net/Articles/931157/ oldtomas <div class="FormattedComment"> My issue with all that is that people start believing such systems' output at face value. And marketeers, desperate for investor capital, will further that perception (without guaranteeing anything, of course).<br> <p> Decision makers will *replace*, not *augment*, people with it.<br> <p> It's a bit like the current generation of not-quite-autonomous cars. You can let it do things, but in difficult situations you've got to take over. Yeah, right.<br> <p> See, I don't want to be the one to debug a more complex program written by an unsupervised LLM.
I find that tough enough with programs written by other people, but there I have a fighting chance of slowly building a mental model of the people who wrote the program in the first place, to serve me as a guide.<br> <p> As far as I can see today, those LLMs will excel at exactly one thing: creating fake narratives and "alternative facts" [1] faster than we can read them, let alone double-check them.<br> <p> [1] <a rel="nofollow" href="https://en.wikipedia.org/wiki/Alternative_facts">https://en.wikipedia.org/wiki/Alternative_facts</a><br> </div> Sun, 07 May 2023 07:00:00 +0000