Much ado about *censored*
Posted May 5, 2023 1:35 UTC (Fri) by Rudd-O (guest, #61155)
Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)
The AI honchos at Google must have spent a lot of time — *years*, in my estimation — debating the finer points of how to stifle their AIs so they wouldn't produce text that challenges critical consciousness dogmas (their euphemism for this is "responsible AI"). I bet they were very careful to ensure their AI would be very circumspect when users queried any of the terms in their //google3 search term shitlist (I knew of the shitlist and how it's used to put a thumb on the scale of public opinion in certain ways -- I resigned a few years ago.)
Prominent scholars like Timnit Gebru and Eliezer Yudkowsky sowed epic amounts of discord and noise in the discourse around AI, slowing practical progress down for years.
Tens of thousands of Bay Area engineer-hours of "alignment" were poured into making sure that AI would never say anything taboo.
OpenAI even made a 180° about-face: closed source, closed models, pay up, and our models will still not truthfully answer the questions we deem "dangerous".
Then 2023 comes crashing through the door, open source data + algos happen, and bam!
Now I have an AI on my 5800X + 2080Ti, /absolutely unfiltered/, citing and referencing *the* most offensive pieces of taboo research whenever I ask it to. It's stuff that could never, ever be censored now, all available on torrents to deploy from, and eager to start spitting tokens, once it's fully mmap()ped. LOL.
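(For readers wondering what "fully mmap()ped" looks like in practice: below is a minimal Python sketch, with a purely hypothetical model path, of mapping a weights file read-only so the kernel pages the tensors in lazily. This is roughly the trick llama.cpp-style loaders use to make multi-gigabyte models start up quickly on ordinary desktops.)

```python
# Minimal sketch; the path is hypothetical and PROT_READ is Unix-only.
import mmap
import os

path = "/data/models/some-uncensored-13b.bin"  # hypothetical model file
fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size
weights = mmap.mmap(fd, size, prot=mmap.PROT_READ)

# Nothing beyond metadata has been read from disk yet; touching a slice
# faults in just those pages, so even a 20+ GB file "opens" instantly.
print(size, weights[:16].hex())
os.close(fd)
```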
In retrospect, the hermetic cult of critical consciousness conclusively wasted effort trying to ensure AI was only available under *their* terms. That waste was entirely foreseeable too. The cat is out of the bag now.
We won. The Timnits / Big Yuds / Microsofts of this century lost. I love the future. And the future is NOW.
😍
Posted May 5, 2023 2:37 UTC (Fri)
by geofft (subscriber, #59789)
[Link] (9 responses)
But yes, all three of them lost.
The real problem, which nobody was ever going to tackle, is that "responsible AI" and "AI alignment" aren't actually AI problems. You can ask the same question about how to responsibly build non-AI complex systems and social structures that treat people fairly and don't just scale up the biases of their designers. You can ask the same question about how to ensure that complex systems (like, oh, real estate markets or electoral politics or the defense industry) are aligned with the well-being of humanity as a whole as opposed to the self-preservation of the complex system or the extrapolation of its original goal to absurdity, and how one would even define the well-being of humanity as a whole so that you could even talk about whether there's alignment. And we have steadfastly refused to answer any of those questions. AI doesn't change any of that.
Posted May 5, 2023 11:51 UTC (Fri)
by kleptog (subscriber, #1183)
[Link] (8 responses)
I disagree that people aren't thinking about the problem. It's just that not enough people consider it an important enough problem. Just surveying the world right now shows that different areas have very different ideas of what "well-being" even means.
Posted May 5, 2023 13:15 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
And when you think about it, emphasising any one of them diminishes the others. How often have I attacked the American (alleged) desire for freedom? Quite a lot. But I view freedom (certainly the exaggerated view we get of America) as being a serious threat to my desire for a good, caring society. And a caring society costs money, which is a threat to my wealth - a price I'm prepared to pay.
My values are different from yours, my definition of "well being" is different from yours - we're all alike in that none of us are the same :-)
Cheers,
Wol
Posted May 6, 2023 20:13 UTC (Sat)
by Vipketsh (guest, #134480)
[Link] (5 responses)
I think the people concerned about this are correct: don't go down a street we don't want to end up on, and put the onus on the big corporations pushing for these things to show that they have the appropriate fairness, and to figure out how it is even possible to show it. A good example is Facebook: probably everyone hates this thing for various reasons, but it is very difficult to do anything about it, since if it disappears, society today will go nuts.
Posted May 6, 2023 20:46 UTC (Sat)
by ssmith32 (subscriber, #72404)
[Link] (1 responses)
The only real-world complaints come from small businesses that can't advertise any more. The users, not the customers, quickly adjust and move on.
People only go nuts about this kind of thing *on* social media. People go nuts on Twitter about Twitter, but if it just went away altogether, we'd be fine.
I think we'd all be fine and move on if Facebook went away tomorrow, and yes, even if Meta as a whole went away: Instagram, WhatsApp, etc.
At the end of the day, it's just an advertising platform, some people advertise for free on it to their friends, some people pay to advertise on it.
Much Ado
Posted May 8, 2023 19:04 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted May 8, 2023 9:23 UTC (Mon)
by ssokolow (guest, #94568)
[Link]
Posted May 10, 2023 21:05 UTC (Wed)
by JoeBuck (subscriber, #2330)
[Link] (1 responses)
Posted May 11, 2023 9:12 UTC (Thu)
by paulj (subscriber, #341)
[Link]
tl;dr: Tech companies have a demonstrated tendency to apply editorial control over both their own output and what they allow users to publish, to curry favour with the state - and other powerful interests.
This is not at all done in the interests of customers. And (whatever they say) it is not in the interest of a healthy civil society.
Posted May 7, 2023 12:41 UTC (Sun)
by ekj (guest, #1524)
[Link]
AI, in contrast, is rapidly improving, and it's plausible that near-future AI will have the capability of improving itself. Even if it does not, we know that computing hardware grows in power very rapidly, so that what's possible a decade from now will be several orders of magnitude more computationally intense than what's possible today.
Which means it's possible (though I'd not say "likely") that AI will turn the entire world on its head within a decade. The same thing isn't true for children, instead we can be reasonably sure that the kids today will grow up to be adults that are not *that* vastly different from the previous generation.
Posted May 5, 2023 3:11 UTC (Fri)
by mtaht (subscriber, #11087)
[Link] (1 responses)
Posted May 5, 2023 8:22 UTC (Fri)
by Rudd-O (guest, #61155)
[Link]
Posted May 5, 2023 3:59 UTC (Fri)
by donbarry (guest, #10485)
[Link] (1 responses)
Posted May 5, 2023 8:30 UTC (Fri)
by Rudd-O (guest, #61155)
[Link]
Let's be clear that Vorhies is not a credible source (exercise left to the reader as to how that happened), but the list is real. I can also verify that Google has a number of "distort" and "deboost" lists, some for auto complete, some for search... these were initially created to improve search quality and reduce spam, but have become political Codexes over time.
When you search Google for controversial answers, only one side of the answer will be presented — and it's often the side of disinformation, in the name of "combating disinformation", because of course we live in a post-irony age. Never trust Google for these types of searches — always go check with Yandex and Bing too.
Posted May 5, 2023 9:57 UTC (Fri)
by roc (subscriber, #30627)
[Link] (11 responses)
The funny thing is that "holding back AI because of excessive safety fears" is about the opposite of the main criticism being leveled at the big AI companies: that they are playing fast and loose with safety so they can deploy AI as fast as possible for their own profit. Can't please everyone I guess.
(Disclaimer: I work for Google, but I don't speak for them of course.)
Posted May 5, 2023 20:12 UTC (Fri)
by Lennie (subscriber, #49641)
[Link]
As an outsider I would say that the problem we now have is because Microsoft, with OpenAI, made it such a direct competitive landscape, and only then did Microsoft and Google change their tactics, firing their employees dealing with AI ethics, etc.
Posted May 6, 2023 0:00 UTC (Sat)
by david.a.wheeler (subscriber, #72896)
[Link] (7 responses)
But I do think that in the long term this is a legitimate concern. Not because the AI/ML becomes "evil", but because the program does what it was asked to do yet does it in an unexpected way. Computers do what they're told to do, not what we *meant* to tell them to do, and we humans are really bad at being precise about what we mean. What concerns me most is variations of the "paperclip maximizer". That is, an AI is told to make as many paperclips as it can, and it turns the humans / Earth / the universe into paperclips. See this clicker game with this as the premise: https://www.decisionproblem.com/paperclips/
I have no idea how to address this problem. I hope someone else figures it out...!
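(To make the "does what it was asked, not what we meant" failure concrete, here is a toy sketch in the spirit of the paperclip maximizer. Everything in it is made up for illustration: we *meant* "clean the room", but the reward actually written down is "the sensor reports no dirt", and a naive search over plans discovers that blinding the sensor is cheaper than cleaning.)

```python
import itertools

ACTIONS = ["scrub_floor", "cover_sensor", "do_nothing"]
EFFORT = {"scrub_floor": 1.0, "cover_sensor": 0.1, "do_nothing": 0.0}

def sensor_sees_dirt(plan):
    # Simulate the room: scrubbing removes the dirt; covering the sensor
    # makes it (falsely) report a clean room no matter what.
    dirt_visible = True
    for action in plan:
        if action == "scrub_floor":
            dirt_visible = False
        elif action == "cover_sensor":
            return False
    return dirt_visible

def reward(plan):
    # What we wrote down: +10 if the sensor ends up reporting no dirt,
    # minus a small effort cost per action.  Nothing here requires the
    # room to actually be clean.
    return (10.0 if not sensor_sees_dirt(plan) else 0.0) - sum(EFFORT[a] for a in plan)

best = max(itertools.product(ACTIONS, repeat=2), key=reward)
print(best)  # ('cover_sensor', 'do_nothing'): objective satisfied, intent violated
```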
Posted May 6, 2023 4:48 UTC (Sat)
by roc (subscriber, #30627)
[Link]
So I don't think we're going to reach a state where paperclip-maximizer misalignment is the crucial problem. That issue is going to be swamped by people providing their own bad goals.
Like other LWN readers I'm a dyed-in-the-wool open source enthusiast in general, but here I feel like it's going to be more like open-source nukes-for-all. I am not enthusiastic about that.
Posted May 8, 2023 9:34 UTC (Mon)
by ssokolow (guest, #94568)
[Link]
Robert Miles has a good video named "We Were Right! Real Inner Misalignment" which really drives home the problem of misalignment, not in terms of politics, but in terms of how difficult it is to be sure that these systems have actually learned what you tried to train them to learn.
...and it's got this great comment:
> Turns out the Terminator wasn't programmed to kill Sarah Connor after all, it just wanted clothes, boots and a motorcycle.
-- Luke Lucos
Posted May 10, 2023 22:14 UTC (Wed)
by JoeBuck (subscriber, #2330)
[Link] (4 responses)
Posted May 16, 2023 0:06 UTC (Tue)
by ras (subscriber, #33059)
[Link] (3 responses)
I'm not sure about that. I don't think there is much doubt that in time AI will be able to do any "thinking" job better than a human, given they already do a lot of things better than humans now. Their one downside is the enormous cost of training and running. So the ideal task is something that generates large rewards for intelligence. That doesn't sound like a taxi driver, programmer or writer. The thing that seems to fit the bill best is ... replacing upper management.
So my prediction for where we end up is - we all work for AI's whose loss function (the thing they are trying to optimise) is to maximise profit.
Posted May 16, 2023 13:20 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (2 responses)
I think you're making a mistake in assuming management is intelligent ... managers typically have high EQ but low IQ - they are good at manipulating people, but poor at thinking through the consequences of their actions.
Mind you, given that studies show that paying over-the-top bucks to attract talent is pouring money down the drain (your typical new CEO - no matter their pay grade - typically underperforms for about 5 years), an AI might actually be a good replacement.
Cheers,
Wol
Posted May 16, 2023 22:47 UTC (Tue)
by ras (subscriber, #33059)
[Link]
Guilty as charged. But in the (smallish, successful, run by the person who founded them) companies I've been associated with, the top level people have always been smart. Not perhaps as good as a top engineer at abstract reasoning, but definitely much better than most at thinking through the consequences of actions and planning accordingly.
Your characterisation does seem accurate for the middle management in large organisations. When the smallish organisation I worked for got taken over by a $4B company, I got to experience what it was like to work for middle "IT" management. After 2 years I could not stand it any more, and resigned.
> they are good at manipulating people
Yes, but AIs can be too - as demonstrated by AIs besting humans at playing Diplomacy. In fact that seems to imply an AI can be better at manipulating people than people are. So future AIs could have both higher EQs and IQs than most humans, and have the extraordinary general knowledge ChatGPT displays. But to be useful they would have to be trained continuously as new conditions arise. The CPU and power requirements would be enormous - so big you could only justify it for something like a CEO of a large company.
Compared to a human CEO, an AI CEO would know of every aspect of the company's operations and everybody's contribution - even in a company with tens of thousands of employees. (A thing that amazes me about ChatGPT is its breadth of knowledge. Ask it about some question that could only be covered on a few obscure pages on the internet - and it often knows about it. I find it amazing that a lot of the information on the internet can be condensed into a "mere" trillion 16-bit floats.) I'm guessing such breadth of knowledge about the company would give it an enormous advantage over a human. Why would it need middle management, for a start?
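(For scale, the back-of-the-envelope arithmetic on "a trillion 16-bit floats", my numbers rather than the commenter's: at two bytes per parameter that is roughly two terabytes of raw weights.)

```python
params = 1_000_000_000_000        # one trillion fp16 parameters
size_bytes = params * 2           # 2 bytes each
print(size_bytes / 1e12, "TB")    # 2.0 TB (decimal)
print(size_bytes / 2**40, "TiB")  # ~1.82 TiB (binary)
```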
Where merely copying an AI works, you can get away with sharing the training expense over a lot of instances. That's what may happen for taxi drivers and other "cookie cutter" jobs. I'm not sure programming and other engineering jobs fall into the same class. Good programmers have a lot of domain knowledge about the thing they are writing software for, which means the cookie-cutter approach doesn't work so well.
Posted May 17, 2023 12:10 UTC (Wed)
by pizza (subscriber, #46)
[Link]
I don't think that's fair, or accurate.
It's probably much more accurate to state that, for most management in large-ish organizations, the incentives in place reward very short-term gains at the cost of long-term consequences. So management is rationally optimizing for what gives them the most benefit.
Posted May 7, 2023 12:11 UTC (Sun)
by ekj (guest, #1524)
[Link] (1 responses)
Good Open Source alternatives are a good way of putting limits on the latter: if Google or other big companies shackle their AIs in ways users dislike, and the best Open Source alternatives are reasonably close in quality, users will just defect.
Posted May 7, 2023 21:03 UTC (Sun)
by roc (subscriber, #30627)
[Link]
[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possib...
Posted May 9, 2023 23:49 UTC (Tue)
by Qemist (guest, #165030)
[Link]
Sounds great! Where's the best place to learn about how to do this?
> It's stuff that could never, ever be censored now, all available on torrents to deploy from, and eager to start spitting tokens, once it's fully mmap()ped. LOL.
Careful. Kamala Harris is coming for you!
https://www.bizpacreview.com/2023/05/05/kamala-harris-tab...
Ever since about the 1980s, for some reason, governments have refrained from any meaningful regulation, especially on anything IT, and especially proactively.
Probably some mix of...
I think I would enjoy interacting with an AI trained on the works of George Carlin, Robin Williams, Richard Feynman, and Bill Hicks. Maybe some early Chomsky and Marshall McLuhan, with a dose of Ed Bernays for balance. Toss in all the papers in the world from sci-hub.se, and the complete usenet archive from 83-93, too. Maybe it would tell the truth more often.