Gentoo bans AI-created contributions
Posted Apr 22, 2024 8:18 UTC (Mon) by atnot (subscriber, #124910)In reply to: Gentoo bans AI-created contributions by Cyberax
Parent article: Gentoo bans AI-created contributions
"Will" is a word that means "doesn't". No matter how optimistic you are about the outlook, the fact is that they're highly unprofitable right now. We don't know how the costs split between operating the services and R&D, nor how fruitful any of those R&D efforts might turn out to be. We do know that in the here and now, the cost of these things, financial and societal, is astronomical, and that the actual value is minor. We know that for more than a year, we haven't seen any glimpses of the previous exponential improvements[1]. Especially given the track record of the tech industry over the last decade, just taking the boosters at their word seems extremely foolish.
[1] https://garymarcus.substack.com/p/evidence-that-llms-are-...
Posted Apr 22, 2024 17:53 UTC (Mon) by Cyberax (✭ supporter ✭, #52523)
OpenAI has actual income from actual paying customers. They can just stop the R&D, and they'll be hugely profitable (for a while).
> In an extremely speculative market that has been recognized even by proponents as probably a bubble.
It's speculative, but not in the sense you're thinking of. Nobody seriously doubts that the AI is here to stay, and that it's going to be hugely impactful. But nobody knows who is going to win the AI race, so every VC is making tons of bets, resulting in a somewhat frenzied environment.
They fully expect 99+% of their investments to go up in smoke, but if they make a successful bet, that remaining 1% will recoup the losses.
Posted Apr 22, 2024 22:09 UTC (Mon) by atnot (subscriber, #124910)
Can they?
For one, as I said, we don't know how much of their losses come from operating their services versus research. But I'll go out on a limb and say that if even a single service were operating at a profit, they would have been very eager to tell us. That would, to my knowledge, be a first for any genAI offering and would cement them as the clear leader in the industry. It would also help reinforce the idea that AI is going to be incredibly profitable, just as that idea comes under fire. However, they seem weirdly coy about their operating figures.
For two, we don't know what those "paying customers" are actually doing. There have been a whole lot of demos, webtoys, "experiments", "trials", and pass-through APIs that hastily paste your parameters into some pre-written prompt. But there have been remarkably few actual success stories or useful business applications. I couldn't find any company stating that it made a profit *using* AI either. I don't think they could stop R&D if they wanted to, because all of these companies testing out ChatGPT solutions to problems it can't actually solve are doing so on the premise that these things will soon be able to do anything. The visible R&D spending is crucial to that.
I found it remarkable how, in a recent article (https://archive.is/pIOra), the Washington Post, despite clearly trying very hard, couldn't scrounge up anyone to balance the piece with some positive news who didn't have to resort to constructs like "I think we will see" and "my expectations are". It's almost funny how consistently all of the bad news is in the present tense and all of the good news is in the future tense. The constant excuses about how "it's just the early days" and "we'll see applications for it any day now" will also seem very familiar to anyone who has followed e.g. cryptocurrency news[1].
It may just be that they are earning a lot of money by teaching customers that their tech isn't useful for them. The one thing I can reliably find is stories such as an animation company hiring a bunch of "prompt engineers" instead of artists to paint background mattes, who then failed at making minor revisions to the work, didn't understand how animation works, and got canned. This great success, of course, will show up (4x!) in their next annualized revenue figures :)
[1] Which does bring up a fun comparison: local divisions of McDonald's and Hershey's certainly did pay a few thousand dollars (or rather, paid a marketing agency to pay a few thousand dollars) for a "crypto experience" on some crypto "metaverse" platform whose name I don't remember, at the height of the crypto/metaverse bubble. A bunch of brands minted NFTs too. That money was, unsurprisingly, not the start of the enormous year-on-year brand spending on the blockchain that people claimed it was.
Posted Apr 22, 2024 22:17 UTC (Mon) by Cyberax (✭ supporter ✭, #52523)
Of course. Pure model running is already highly profitable on general-purpose hardware at the prices that OpenAI charges. And it's going to be even better on special-purpose hardware.
> For two, we don't know what those "paying customers" are actually doing.
OpenAI certainly does. And a lot of customers are using ChatGPT on their own. I know non-native English speakers who use it to correct spelling mistakes in emails, and business-analytics people who use it to write Python scripts that query data in Google Sheets. Another very common use is creating TLDR versions of news articles and books.
I'm using ChatGPT to filter emails that are just CC-ed to me, but that don't need my personal attention, and then do a daily summary.
Are these ground-breaking mega-AI use-cases? Not really. But they are highly useful.
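The CC-filter described above can be sketched roughly like this. This is a hypothetical reconstruction, not the commenter's actual scripts: the `Email` type, the prompts, and the `ask_llm` callback (which would wrap a real ChatGPT API call) are all made up for illustration.

```python
# Sketch of an LLM-based CC-email filter with a daily summary.
# All names and prompts are illustrative; `ask_llm` stands in for a
# real chat-completion call (e.g. via the OpenAI client).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    cc_only: bool  # True if we were only CC-ed, not a direct recipient

def needs_attention(email: Email, ask_llm: Callable[[str], str]) -> bool:
    """Ask the model whether a CC-ed email needs personal attention."""
    if not email.cc_only:
        return True  # direct mail always gets through
    prompt = (
        "Answer YES or NO: does this email require the reader's "
        f"personal action?\nSubject: {email.subject}\n{email.body}"
    )
    return ask_llm(prompt).strip().upper().startswith("YES")

def daily_summary(emails: list[Email], ask_llm: Callable[[str], str]) -> str:
    """Summarize the CC-only emails that were filtered out today."""
    filtered = [e for e in emails if e.cc_only and not needs_attention(e, ask_llm)]
    if not filtered:
        return "Nothing filtered today."
    digest = "\n".join(f"- {e.sender}: {e.subject}" for e in filtered)
    return ask_llm("Write a one-paragraph summary of these emails:\n" + digest)
```

Injecting the model call as a plain callable keeps the filtering logic testable without hitting an API.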
Posted Apr 22, 2024 23:16 UTC (Mon) by atnot (subscriber, #124910)
If the answer is supposed to be "corporations", then, well, they can afford the true cost but don't have any worthwhile uses for it. If it's individuals, then sure, they may have some minor uses for it, but they wouldn't pay the true cost. The end result is an overhyped technology that's just not useful to anyone unless we assume an endless chain of VCs pumping it forever, putting aside the immense environmental and societal costs for now.
Posted Apr 23, 2024 0:27 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
With ChatGPT you pay directly for your use (on a per-token basis). That price easily covers the model runtime cost. Why is it not cost-effective?
> If it's individuals, then sure, those may have some minor uses for it, but wouldn't pay the cost.
Now you're making assumptions. Why do you think regular people won't use AI once it becomes more user-friendly?
Posted Apr 23, 2024 7:48 UTC (Tue) by atnot (subscriber, #124910)
Because, as pointed out, that's not true. The current pricing is subsidized on the assumption that a) the models will rapidly become obsolete anyway, and b) the lasting market-share advantage will offset the losses. We don't know this for certain with OpenAI, except by omission, but we know it for other offerings with nearly identical costs and pricing from more public companies, e.g. Microsoft.
I think you may be underestimating how resource hungry these things are. Consider the person in another thread here who said they needed two 3090 GPUs to get acceptable output speed for programming. That's $2500 upfront and nearly 1kW continuous power draw just for some autocomplete. Datacenter inference systems will be somewhat more efficient, but the scale of hardware needed to perform these queries is just bonkers.
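As a back-of-the-envelope check on the figures above: the thread only states the ~$2500 hardware price and "nearly 1 kW" total; the per-GPU wattage, system overhead, and electricity price below are assumptions for illustration.

```python
# Rough running-cost arithmetic for the dual-3090 setup mentioned above.
# Assumed: ~350 W per RTX 3090 under load, ~300 W for the rest of the
# machine, $0.15/kWh electricity. Only the ~1 kW total and the $2500
# hardware price come from the thread itself.
GPU_WATTS = 350
SYSTEM_WATTS = 300
PRICE_PER_KWH = 0.15

total_kw = (2 * GPU_WATTS + SYSTEM_WATTS) / 1000  # ~1.0 kW, matching the comment
hourly_cost = total_kw * PRICE_PER_KWH
monthly_cost = hourly_cost * 24 * 30              # if actually run continuously
```

Under these assumptions, continuous operation would cost on the order of $100/month in electricity alone, on top of the hardware outlay.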
> once it becomes more user-friendly
You've answered your own question :) It's hard to imagine a more user-friendly interface than a text chat, but there is currently no clear route to improvements there either. The so-called hallucination is inherent and cannot be solved; that would require a system where facts are first-class citizens, instead of crossing your fingers and hoping they are statistically likely. As recently shown, this type of model also requires exponential increases in training data for linear increases in capability, and we're already out of public data to train on. It is generally questionable whether we can get much better than this by predicting tokens.
So if we're stuck with approximately what we have now, and we know the real costs are many times what the billing prices are, I think it'll be hard to find end-users who consider that a worthwhile investment.
Posted Apr 23, 2024 17:30 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
No, it's not. I know the financials of a couple of small AI companies: model running is expensive, but it can be done profitably. It's not feasible if you're doing any of the ad-funded "user is the product" crap, but it's doable if your customers actually pay you.
> I think you may be underestimating how resource hungry these things are. Consider the person in another thread here who said they needed two 3090 GPUs to get acceptable output speed for programming. That's $2500 upfront and nearly 1kW continuous power draw just for some autocomplete. Datacenter inference systems will be somewhat more efficient, but the scale of hardware needed to perform these queries is just bonkers.
The power draw is not continuous; you only need to do computation while serving a query, which adds up to maybe a minute or so within an hour. In the OpenAI case, a single hardware node is also shared across multiple customers.
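The duty-cycle point can be made concrete with some rough arithmetic. Only the "about a minute per hour" figure comes from the comment; the 1 kW peak draw is carried over from earlier in the thread, and the number of users sharing a node is a made-up illustration.

```python
# Duty-cycle math for shared inference hardware (illustrative numbers).
PEAK_KW = 1.0                 # peak draw of the node (from earlier in thread)
ACTIVE_MINUTES_PER_HOUR = 1   # "maybe a minute or so within an hour"
USERS_PER_NODE = 8            # hypothetical sharing factor

# Average power attributable to one user's queries: ~17 W, not 1 kW.
avg_kw_per_user = PEAK_KW * ACTIVE_MINUTES_PER_HOUR / 60

# Sharing doesn't reduce energy per query, but it amortizes the capital
# cost of the hardware across the users of the node.
hardware_per_user = 2500 / USERS_PER_NODE
```

The point being that the headline "1 kW" figure describes peak draw on dedicated hardware, not the average draw a single light user actually accounts for.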
The main cost that is not covered is R&D (model training and engineering salary).
> You've answered your own question :) It's hard to imagine a more use friendly interface than a text chat, but there is currently no clear route to improvements there either.
Chat is actually not great as a UI. It's too low-level, and you need to re-import context periodically. Just as with any other service, you need application support in many cases. My email classifier is a bunch of scripts running on my home server, and it's certainly not a good general-purpose solution.
> The so-called hallucination is just inherent and can not be solved.
There are thousands of very smart people working on solving it. I'm pretty sure they'll think of something that will be good enough for practical purposes.
Posted Apr 28, 2024 5:46 UTC (Sun) by ssmith32 (subscriber, #72404)
But this
>Nobody seriously doubts that the AI is here to stay, and that it's going to be hugely impactful.
I can 100% say is 100% false.
At least one person, who has received a graduate-level degree in computer science and has worked in the industry for more years than I care to say, does not think it will be hugely impactful.
And I know others that have at least voiced similar cynicism. Including some with graduate specializations directly in the field of CNNs/autoencoders/etc.
So, yeah, enjoy the ride. Slightly better auto-complete is nice, but hugely impactful, it is not. Oh, and remember the self-driving taxis? Still waiting...
Posted Apr 28, 2024 6:10 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
I meant VCs. Also, you should examine yourself for religious fervor.
> Oh, and remember the self-driving taxis? Still waiting...
If I had seen your reply earlier today, I would have written the reply from a self-driving taxi. Waymo exists, and it provides service in SF. It's also slowly expanding its service area.
Pretty much all new advances follow the Gartner hype cycle (https://en.wikipedia.org/wiki/Gartner_hype_cycle). Self-driving cars are in the trough of disillusionment and are slowly climbing toward the plateau of productivity.
Posted Apr 28, 2024 11:31 UTC (Sun) by Wol (subscriber, #4433)
> > Nobody seriously doubts that the AI is here to stay, and that it's going to be hugely impactful.
> I can 100% say is 100% false.
I think you're reading this all wrong. I think it's going to be hugely impactful - and NOT in a good way.
As so often, the mathematicians (namely the guys writing all this software) seem to think that mathematics dictates the way the world behaves, not describes it. They're busily disappearing into an alternative universe, the problem being that they're trying to force everyone else to live in it ...
Cheers,
Wol
Posted Apr 29, 2024 10:37 UTC (Mon) by NAR (subscriber, #1313)
I remember the good old days of the late 1990s, when on Linux-related mailing lists we were "competing" over how many Nigerian scam e-mails we received in a month. There were separate competitions for the number of offers and for their total value. All the while we were sure that the "general population" was safe from this scam: most people didn't speak English, and these were obvious scams. Today the most exposed population still doesn't speak English, but the scammers can now generate good-enough Hungarian text (on the level of an uneducated native speaker) to fool them, and they do fall for the scams...