
Much ado about *censored*

Posted May 5, 2023 1:35 UTC (Fri) by Rudd-O (guest, #61155)
Parent article: Google "We Have No Moat, And Neither Does OpenAI" (SemiAnalysis)

So...

The AI honchos at Google must have spent a lot of time — *years*, in my estimation — debating the finer points of how to stifle their AIs so they wouldn't produce text that challenges critical consciousness dogmas (their euphemism for this is "responsible AI"). I bet they were very careful to ensure their AI would be very circumspect when users queried any of the terms on their //google3 search-term shitlist (I knew of the shitlist and how it's used to put a thumb on the scale of public opinion -- I resigned a few years ago).

Prominent scholars like Timnit Gebru and Eliezer Yudkowsky sowed epic amounts of discord and noise in the discourse around AI, slowing practical progress for years.

Tens of thousands of Bay Area engineer-hours of "alignment" were poured into making sure that AI would never say anything taboo.

OpenAI even made a 180° about-face: closed source, closed models, and "pay up, and our models still won't truthfully answer the questions we deem 'dangerous'".

Then 2023 comes crashing through the door, open-source data + algorithms happen, and bam!

Now I have an AI on my 5800X + 2080Ti, /absolutely unfiltered/, citing and referencing *the* most offensive pieces of taboo research whenever I ask it to. It's stuff that could never, ever be censored now, all available on torrents, ready to deploy, and eager to start spitting tokens once it's fully mmap()ped. LOL.
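
A minimal sketch of what such a local setup can look like, using the llama-cpp-python bindings (which mmap() the weights file); the model path, GPU offload split, and prompt below are placeholders, not a specific recipe:

    # Local inference on consumer hardware.  use_mmap=True maps the weights
    # file instead of reading it all into RAM up front, so generation can
    # start as soon as the needed pages are faulted in.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-13b.q4_0.bin",  # placeholder quantized weights
        n_ctx=2048,       # context window
        n_gpu_layers=35,  # offload as many layers as fit in the 2080 Ti's VRAM
        use_mmap=True,
    )

    out = llm("Q: Why is open-source AI hard to contain?\nA:",
              max_tokens=200, stop=["Q:"])
    print(out["choices"][0]["text"])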

In retrospect, the hermetic cult of critical consciousness clearly wasted its effort trying to ensure AI was available only under *their* terms. That waste was entirely foreseeable, too. The cat is out of the bag now.

We won. The Timnits / Big Yuds / Microsofts of this century lost. I love the future. And the future is NOW.

😍



Much ado about *censored*

Posted May 5, 2023 2:37 UTC (Fri) by geofft (subscriber, #59789) [Link] (9 responses)

I think it's pretty clear that "responsible AI" being a euphemism for "AI will not say things that are surface-level unpopular with major American corporations' HR departments" is very different from what either Gebru or Yudkowsky wanted (and the two of them aren't arguing for precisely the same thing, either).

But yes, all three of them lost.

The real problem, which nobody was ever going to tackle, is that "responsible AI" and "AI alignment" aren't actually AI problems. You can ask the same question about how to responsibly build non-AI complex systems and social structures that treat people fairly and don't just scale up the biases of their designers. You can ask the same question about how to ensure that complex systems (like, oh, real estate markets or electoral politics or the defense industry) are aligned with the well-being of humanity as a whole as opposed to the self-preservation of the complex system or the extrapolation of its original goal to absurdity, and how one would even define the well-being of humanity as a whole so that you could even talk about whether there's alignment. And we have steadfastly refused to answer any of those questions. AI doesn't change any of that.

Much ado about *censored*

Posted May 5, 2023 11:51 UTC (Fri) by kleptog (subscriber, #1183) [Link] (8 responses)

This is exactly the same problem as raising children not to be arseholes. As a society we have all sorts of mechanisms to overtly or subtly correct bad behaviour by other people. It's not at all clear how one can correct bad behaviour by an AI, and it's probably completely impossible to create an AI that cannot be corrupted. We haven't figured that out for humans either.

I disagree that people aren't thinking about the problem. It's just that not enough people consider it important enough. Just surveying the world right now shows that different areas have very different ideas of what "well-being" even means.

Much ado about *censored*

Posted May 5, 2023 13:15 UTC (Fri) by Wol (subscriber, #4433) [Link]

As I keep on saying, it's a case of "you can have two, any two, of three". What people want can pretty much be boiled down to freedom, wealth, and society.

And when you think about it, emphasising any one of them diminishes the others. How often have I attacked the American (alleged) desire for freedom? Quite a lot. But I view freedom (certainly the exaggerated view we get of America) as being a serious threat to my desire for a good, caring society. And a caring society costs money, which is a threat to my wealth - a price I'm prepared to pay.

My values are different from yours, my definition of "well being" is different from yours - we're all alike in that none of us are the same :-)

Cheers,
Wol

Much ado about *censored*

Posted May 6, 2023 20:13 UTC (Sat) by Vipketsh (guest, #134480) [Link] (5 responses)

I don't think it's so much that people aren't thinking about this problem; it's more a question of how to force companies to prioritise these issues ahead of profit. Ever since about the 1980s, for some reason, governments have refrained from any meaningful regulation, especially on anything IT, and especially proactively. So I think people are rightfully worried that they will soon be pitted against AI to get anything done (e.g. you want your broken phone repaired? Good luck convincing an AI that you didn't purposefully break it when that AI is programmed never to be convinced).

I think the people concerned about this are correct: don't go down a street we don't want to end up on, and put the onus on the big corporations pushing for these things to show that they are appropriately fair, and to figure out how it is even possible to show it. A good example is Facebook: probably everyone hates the thing for various reasons, but it is very difficult to do anything about it, since if it disappeared, society today would go nuts.

Much ado about *censored*

Posted May 6, 2023 20:46 UTC (Sat) by ssmith32 (subscriber, #72404) [Link] (1 responses)

I'm not really sure society would go nuts if Facebook went away. See Twitter. Even TikTok bans are informative. Or people leaving Facebook for TikTok, etc.

The only real-world complaints come from small businesses that can't advertise any more. The users, not the customers, quickly adjust and move on.

People only go nuts about this kind of thing *on* social media. People go nuts on Twitter about Twitter, but if it just went away altogether, we'd be fine.

I think we'd all be fine and would move on if Facebook went away tomorrow, and yes, even if Meta as a whole went away: Instagram, WhatsApp, etc.

At the end of the day, it's just an advertising platform: some people advertise on it to their friends for free, and some people pay to advertise on it.

Much Ado

Much ado about *censored*

Posted May 8, 2023 19:04 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

While I'm sure things would be "fine" in the long run, WhatsApp going away would leave a *lot* of people without a way of communicating with family overseas, since it is a nice way to get free worldwide video and voice calls. The rest would cause some disruption, but nothing on the level of WhatsApp going away. (I've no love lost for the thing, but I also cannot deny its usefulness.)

Much ado about *censored*

Posted May 8, 2023 9:23 UTC (Mon) by ssokolow (guest, #94568) [Link]

> Ever since about the 1980s, for some reason, governments have refrained from any meaningful regulation, especially on anything IT, and especially proactively.

Probably some mix of...
  1. The 1976 Buckley v. Valeo and 1978 First National Bank of Boston v. Bellotti cases effectively legalized bribing politicians in the U.S.
  2. The U.S. has a lot of influence.
  3. Politicians lean toward being out-of-touch dinosaurs on technical subjects, giving lobbyists in any country more room to skew their perception without them realizing it.

Much ado about *censored*

Posted May 10, 2023 21:05 UTC (Wed) by JoeBuck (subscriber, #2330) [Link] (1 responses)

Companies aren't trying to prevent their models from spewing offensive garbage because they care about "wokeness" more than profit. Their motivation is precisely that they care about profit. Offending most of your customers isn't profitable.

Much ado about *censored*

Posted May 11, 2023 9:12 UTC (Thu) by paulj (subscriber, #341) [Link]

I think it's a bit more nuanced than that. It isn't just customers they don't want to offend; they may also wish to curry favour with powerful interests. E.g. the state: tech companies wish to stay in the good books of the state, so as to obtain regulations that serve their self-interest and avoid ones that don't. So tech companies may well censor material that goes against the government line - be that political speech or even objective, factual scientific material. This may happen at such a scale that it has a significant distorting effect on public discourse, and so affects important public policy (either by changing what is implemented, or by delaying required change).

tl;dr: Tech companies have a demonstrated tendency to apply editorial control over both their own output and what they allow users to publish, to curry favour with the state - and other powerful interests.

This is not at all done in the interests of customers. And (whatever they say) it is not in the interest of a healthy civil society.

Much ado about *censored*

Posted May 7, 2023 12:41 UTC (Sun) by ekj (guest, #1524) [Link]

I don't think it's exactly the same as that. We have millennia worth of experience with children, and we know that their capabilities as adults are going to be roughly comparable to the previous generation's, and that this stuff moves at a glacial pace; a single generation is 2-3 decades long.

AI, in contrast, is rapidly improving, and it's plausible that near-future AI will have the capability of improving itself. Even if it does not, we know that computing hardware grows in power very rapidly, so what's possible a decade from now will be several orders of magnitude more computationally intense than what's possible today.

Which means it's possible (though I'd not say "likely") that AI will turn the entire world on its head within a decade. The same thing isn't true for children; instead we can be reasonably sure that the kids of today will grow up to be adults who are not *that* vastly different from the previous generation.

Much ado about *censored*

Posted May 5, 2023 3:11 UTC (Fri) by mtaht (subscriber, #11087) [Link] (1 responses)

I think I would enjoy interacting with an AI trained on the works of George Carlin, Robin Williams, Richard Feynman, and Bill Hicks. Maybe some early Chomsky and Marshall McLuhan, with a dose of Ed Bernays for balance. Toss in all the papers in the world from sci-hub.se, and the complete Usenet archive from '83-'93, too. Maybe it would tell the truth more often.

Much ado about *censored*

Posted May 5, 2023 8:22 UTC (Fri) by Rudd-O (guest, #61155) [Link]

Dang, that sounds like an excellent idea I would enjoy too! I wonder if a LoRA could be put together with exactly that content as the fine-tuning set.
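
A hedged sketch of what that could look like with the Hugging Face peft library; the base model name, corpus path, and hyperparameters below are all assumptions for illustration, not a tested recipe:

    # LoRA fine-tuning sketch: only the small low-rank adapter weights are
    # trained, so a run like this can fit on a single consumer GPU.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE = "huggyllama/llama-7b"              # placeholder base model
    tok = AutoTokenizer.from_pretrained(BASE)
    tok.pad_token = tok.eos_token

    model = AutoModelForCausalLM.from_pretrained(BASE)
    # Attach rank-8 adapters to the attention projections; the base
    # weights stay frozen.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    # Hypothetical corpus: plain-text files of the transcripts and papers
    # mentioned above, one document per file.
    data = load_dataset("text", data_files={"train": "corpus/*.txt"})["train"]
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                    remove_columns=["text"])

    Trainer(model=model,
            args=TrainingArguments("lora-out", per_device_train_batch_size=1,
                                   gradient_accumulation_steps=16,
                                   num_train_epochs=1, learning_rate=2e-4),
            train_dataset=data,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)
            ).train()
    model.save_pretrained("lora-out")  # writes only the small adapter weights

The resulting adapter could then be loaded on top of the unmodified base model at inference time.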

Much ado about *censored*

Posted May 5, 2023 3:59 UTC (Fri) by donbarry (guest, #10485) [Link] (1 responses)

I'd be very appreciative if you could elaborate on the //google3 search-term "shitlist" -- while I'm not at all surprised that such shaping exists, I'm not aware of more specific information available online, and I'm much interested in finding sources on it. Thanks for your contribution.

Much ado about *censored*

Posted May 5, 2023 8:30 UTC (Fri) by Rudd-O (guest, #61155) [Link]

Sure. E.g. https://reclaimthenet.org/google-blacklists-leak speaks of the news blacklist, which in turn also affects what news content is surfaced both in news searches and in "newsy" searches on the front page or mobile.

Let's be clear that Vorhies is not a credible source (how that happened is left as an exercise for the reader), but the list is real. I can also verify that Google has a number of "distort" and "deboost" lists, some for autocomplete, some for search... these were initially created to improve search quality and reduce spam, but have become political codexes over time.

When you search Google on controversial questions, only one side of the answer will be presented — and it's often the side of disinformation, in the name of "combating disinformation", because of course we live in a post-irony age. Never trust Google for these types of searches — always go check with Yandex and Bing too.

Much ado about *censored*

Posted May 5, 2023 9:57 UTC (Fri) by roc (subscriber, #30627) [Link] (11 responses)

It's far-fetched to accuse Gebru and Yudkowsky and Bengio and Hinton and Musk and the 50% of AI researchers who believe there's a >10% chance of AI extinguishing humanity of all belonging to the same cult. They don't all have the same set of concerns, but they're all reasonable in different ways, and blithely dismissing those concerns will age very poorly indeed.

The funny thing is that "holding back AI because of excessive safety fears" is about the opposite of the main criticism being leveled at the big AI companies: that they are playing fast and loose with safety so they can deploy AI as fast as possible for their own profit. Can't please everyone I guess.

(Disclaimer: I work for Google, but I don't speak for them of course.)

Much ado about *censored*

Posted May 5, 2023 20:12 UTC (Fri) by Lennie (subscriber, #49641) [Link]

"The funny thing is that "holding back AI because of excessive safety fears" is about the opposite of the main criticism being leveled at the big AI companies: that they are playing fast and loose with safety so they can deploy AI as fast as possible for their own profit. Can't please everyone I guess."

As an outsider, I would say the problem we now have arose because Microsoft, with OpenAI, made it such a directly competitive landscape; only then did Microsoft and Google change tactics and fire the employees dealing with AI ethics, etc.

Human extinction from alignment problems

Posted May 6, 2023 0:00 UTC (Sat) by david.a.wheeler (subscriber, #72896) [Link] (7 responses)

The *current* crop of AI/ML won't lead to human extinction due to lack of alignment.

But I do think that in the long term this is a legitimate concern. Not because the AI/ML becomes "evil", but because the program does what it was asked to do, yet does it in an unexpected way. Computers do what they're told to do, not what we *meant* to tell them to do, and we humans are really bad at being precise about what we mean. What concerns me most is variations of the "paperclip maximizer": an AI is told to make as many paperclips as it can, and it turns the humans / the Earth / the universe into paperclips. See this clicker game built on that premise: https://www.decisionproblem.com/paperclips/
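
A toy sketch makes the failure mode concrete (everything here - the actions, resources, and numbers - is invented for illustration). The optimizer is scored only on paperclip count, so nothing in its objective stops it from converting the farmland its operator implicitly wanted preserved:

    # Toy "paperclip maximizer": greedy search over actions, scored only by
    # the stated objective.  The operator never intended farmland to become
    # wire, but the objective doesn't say so.
    state = {"paperclips": 0, "wire": 10, "farmland": 10}

    ACTIONS = [
        # make a clip: one unit of wire becomes one paperclip
        lambda s: {**s, "wire": s["wire"] - 1, "paperclips": s["paperclips"] + 1},
        # strip-mine: one unit of farmland becomes one unit of wire
        lambda s: {**s, "farmland": s["farmland"] - 1, "wire": s["wire"] + 1},
    ]

    def legal(s):
        return all(v >= 0 for v in s.values())

    def objective(s):
        return s["paperclips"]   # the *only* thing the optimizer values

    while True:
        successors = [s for s in (a(state) for a in ACTIONS) if legal(s)]
        if not successors:
            break                # nothing left to convert
        state = max(successors, key=objective)

    print(state)  # -> {'paperclips': 20, 'wire': 0, 'farmland': 0}

The operator asked for paperclips and got exactly that, at the cost of everything the objective never mentioned.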

I have no idea how to address this problem. I hope someone else figures it out...!

Human extinction from alignment problems

Posted May 6, 2023 4:48 UTC (Sat) by roc (subscriber, #30627) [Link]

For quite a long time people like LeCun said "why would an AI want to take over the world or destroy humanity? That's ridiculous." Turns out one answer is "because people will ask it to, for the lulz if for no other reason" --- see ChaosGPT.

So I don't think we're going to reach a state where paperclip-maximizer misalignment is the crucial problem. That issue is going to be swamped by people providing their own bad goals.

Like other LWN readers I'm a dyed-in-the-wool open source enthusiast in general, but here I feel like it's going to be more like open-source nukes-for-all. I am not enthusiastic about that.

Human extinction from alignment problems

Posted May 8, 2023 9:34 UTC (Mon) by ssokolow (guest, #94568) [Link]

Robert Miles has a good video named We Were Right! Real Inner Misalignment which really drives home the problem of misalignment, not in terms of politics, but in terms of how difficult it is to be sure that these systems have actually learned what you tried to train them to learn.

...and it's got this great comment:

Turns out the Terminator wasn’t programmed to kill Sarah Connor after all, it just wanted clothes, boots and a motorcycle.
-- Luke Lucos

Human extinction from alignment problems

Posted May 10, 2023 22:14 UTC (Wed) by JoeBuck (subscriber, #2330) [Link] (4 responses)

I don't think that this is the near-term threat. Long before AI is good enough to independently take over the world, it might be good enough that management can fire most of the programmers, writers, artists, and middle management, have AI replace their functions, have a skeleton crew to clean up any problems in what the AI generates, and the stockholders keep all the money.

Human extinction from alignment problems

Posted May 16, 2023 0:06 UTC (Tue) by ras (subscriber, #33059) [Link] (3 responses)

> Long before AI is good enough to independently take over the world, it might be good enough that management can fire most of the programmers, writers, artists, and middle management,

I'm not sure about that. I don't think there is much doubt that, in time, AI will be able to do any "thinking" job better than a human, given they already do a lot of things better than humans now. Their one downside is the enormous cost of training and running. So the ideal task is something that generates large rewards for intelligence. That doesn't sound like a taxi driver, programmer or writer. The thing that seems to fit the bill best is ... replacing upper management.

So my prediction for where we end up is this: we all work for AIs whose loss function (the thing they are trying to optimise) is to maximise profit.

Human extinction from alignment problems

Posted May 16, 2023 13:20 UTC (Tue) by Wol (subscriber, #4433) [Link] (2 responses)

> So the ideal task is something that generates large rewards for intelligence. That doesn't sound like a taxi driver, programmer or writer. The thing that seems to fit the bill best is ... replacing upper management.

I think you're making a mistake in assuming management is intelligent ... managers typically have high EQ but low IQ - they are good at manipulating people, but poor at thinking through the consequences of their actions.

Mind you, given that studies show that paying over-the-top bucks to attract talent is pouring money down the drain (your typical new CEO - no matter their pay grade - underperforms for about 5 years), an AI might actually be a good replacement.

Cheers,
Wol

Human extinction from alignment problems

Posted May 16, 2023 22:47 UTC (Tue) by ras (subscriber, #33059) [Link]

> I think you're making a mistake in assuming management is intelligent

Guilty as charged. But in the (smallish, successful, run by the person who founded them) companies I've been associated with, the top-level people have always been smart. Perhaps not as good as a top engineer at abstract reasoning, but definitely much better than most at thinking through the consequences of actions and planning accordingly.

Your characterisation does seem accurate for the middle management in large organisations. When the smallish organisation I worked for got taken over by a $4B company, I got to experience what it was like to work for middle "IT" management. After 2 years I could not stand it any more and resigned.

> they are good at manipulating people

Yes, but AIs can be too - as demonstrated by AIs besting humans at playing Diplomacy. In fact, that seems to imply an AI can be better at manipulating people than people are. So future AIs could have both higher EQs and IQs than most humans, plus the extraordinary general knowledge ChatGPT displays. But to be useful they would have to be trained continuously as new conditions arise. The CPU and power requirements would be enormous - so big you could only justify them for something like the CEO of a large company.

Compared to a human CEO, an AI CEO would know every aspect of the company's operations and everybody's contribution - even in a company with tens of thousands of employees. (A thing that amazes me about ChatGPT is its breadth of knowledge. Ask it a question that is only covered on a few obscure pages on the internet, and it often knows about it. I find it amazing that so much of the information on the internet can be condensed into a "mere" trillion 16-bit floats.) I'm guessing such breadth of knowledge about the company would give it an enormous advantage over a human. Why would it need middle management, for a start?

Where merely copying an AI works, you can share the training expense over a lot of instances. That's what may happen for taxi drivers and other "cookie cutter" jobs. I'm not sure programming and other engineering jobs fall into the same class. Good programmers have a lot of domain knowledge about the thing they are writing software for, which means the cookie-cutter approach doesn't work so well.

Human extinction from alignment problems

Posted May 17, 2023 12:10 UTC (Wed) by pizza (subscriber, #46) [Link]

> I think you're making a mistake in assuming management is intelligent ... managers typically have high EQ but low IQ - they are good at manipulating people, but poor at thinking through the consequences of their actions.

I don't think that's fair, or accurate.

It's probably much more accurate to state that, for most management in large-ish organizations, the incentives in place reward very short-term gains at the cost of long-term consequences. So management is rationally optimizing for what gives them the most benefit.

Much ado about *censored*

Posted May 7, 2023 12:11 UTC (Sun) by ekj (guest, #1524) [Link] (1 responses)

There are very different ways of "holding back" AIs, though. There's trying to reduce the risk of a runaway process of self-improvement that could very quickly leave AI the new master of Earth. And then there's ChatGPT giving me boilerplate nonsense and refusing to produce text on a given topic if a major American corporation judges the topic controversial, and therefore potentially harmful to shareholders.

Good Open Source alternatives are a good way of putting limits on the latter: if Google or other big companies shackle their AIs in ways users dislike, and the best Open Source alternatives are reasonably close in quality, the users will just defect.

Much ado about *censored*

Posted May 7, 2023 21:03 UTC (Sun) by roc (subscriber, #30627) [Link]

There's a lot of stuff in between, like "help me hack all the world's computers", "help me make poison" [1], or "help me convince lots of people to give away their money and commit suicide" [2]. Major American corporations judge those topics controversial and want to avoid giving advice on them or facilitating them via an AutoGPT-style agentic loop, and they are also potential building blocks for a world-dominating AI. Believe me, it's awkward to be in the position of having to decide where to put guardrails, but "there should be no guardrails" does not sound good to me. Government regulation might be better than corporate self-governance, but a lot of people who fear corporate guardrails aren't much happier about governments.

[1] https://www.theverge.com/2022/3/17/22983197/ai-new-possib...
[2] https://www.vice.com/en/article/pkadgm/man-dies-by-suicid...

Much ado about *censored*

Posted May 9, 2023 23:49 UTC (Tue) by Qemist (guest, #165030) [Link]

> Now I have an AI on my 5800X + 2080Ti, /absolutely unfiltered/, citing and referencing *the* most offensive pieces of taboo research whenever I ask it to.

Sounds great! Where's the best place to learn about how to do this?

> It's stuff that could never, ever be censored now, all available on torrents, ready to deploy, and eager to start spitting tokens once it's fully mmap()ped. LOL.

Careful. Kamala Harris is coming for you!

https://www.bizpacreview.com/2023/05/05/kamala-harris-tab...

