Human extinction from alignment problems
Posted May 6, 2023 4:48 UTC (Sat) by roc (subscriber, #30627)
In reply to: Human extinction from alignment problems by david.a.wheeler
Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)
For quite a long time, people like LeCun asked "why would an AI want to take over the world or destroy humanity? That's ridiculous." It turns out one answer is "because people will ask it to, for the lulz if for no other reason"; see ChaosGPT.
So I don't think we're going to reach a state where paperclip-maximizer misalignment is the crucial problem. That issue is going to be swamped by people supplying their own bad goals.
Like other LWN readers, I'm a dyed-in-the-wool open source enthusiast in general, but here it feels more like open-source nukes-for-all. I am not enthusiastic about that.
