Much ado about *censored*
Posted May 5, 2023 1:35 UTC (Fri) by Rudd-O (guest, #61155)Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)
The AI honchos at Google must have spent a lot of time — *years*, in my estimation — debating the finer points of how to muzzle their AIs so they wouldn't produce text that challenges critical-consciousness dogmas (their euphemism for this is "responsible AI"). I bet they were very careful to ensure their AI would be circumspect whenever users queried any of the terms on their //google3 search-term shitlist. (I knew of the shitlist, and how it's used to put a thumb on the scale of public opinion in certain ways; I resigned a few years ago.)
Prominent scholars like Timnit Gebru and Eliezer Yudkowsky sowed epic amounts of discord and noise in the discourse around AI, slowing practical progress for years.
Tens of thousands of Bay Area engineer-hours of "alignment" were poured into making sure that AI would never utter a taboo.
OpenAI even did an about-face to closed source and closed models: pay up, and our models will still refuse to truthfully answer the questions we deem "dangerous".
Then 2023 comes crashing through the door, open-source data + algos happen, and bam!
Now I have an AI on my 5800X + 2080Ti, /absolutely unfiltered/, citing and referencing *the* most offensive pieces of taboo research whenever I ask it to. It's stuff that could never, ever be censored now — all available on torrents, ready to deploy, and eager to start spitting tokens once it's fully mmap()ed. LOL.
In retrospect, the hermetic cult of critical consciousness demonstrably wasted its effort trying to ensure AI was only available under *their* terms. That waste was entirely foreseeable, too. The cat is out of the bag now.
We won. The Timnits / Big Yuds / Microsofts of this century lost. I love the future. And the future is NOW.
😍
