
Well this is interesting

Posted Mar 2, 2026 17:52 UTC (Mon) by roc (subscriber, #30627)
In reply to: Well this is interesting by jpeisach
Parent article: Gram 1.0 released

> it's self hosted because I at least try to care about the environment

Are you producing your own renewable electricity via rooftop solar or somesuch?

If not, most likely AI running in datacentres is more power-efficient than you running it locally.



Well this is interesting

Posted Mar 2, 2026 18:31 UTC (Mon) by valderman (subscriber, #56479) (8 responses)

Unlikely, since the models running in data centres are an order of magnitude larger (and thus more energy consuming) than the ones you can run on most consumer GPUs.

Well this is interesting

Posted Mar 3, 2026 6:32 UTC (Tue) by roc (subscriber, #30627) (7 responses)

In thinking mode, weaker models generally require more tokens to get to the same result, sometimes a lot more, so can end up being more expensive. Also, best to not make assumptions about the size or (in)efficiency of models running in big-tech datacenters; you might be surprised. As Cyberax said below, there are huge efficiency gains running things at large scale and sharing hardware across users.

It is at least *not guaranteed* that running a model locally helps the environment. Unless, as I said, you're powering it with your own renewable electricity, in which case good for you.

Well this is interesting

Posted Mar 3, 2026 12:47 UTC (Tue) by jpeisach (subscriber, #181966)

Huh. I see.

Well, I do feel like Ollama is the best option anyway because otherwise you have to start dealing with subscriptions, limits, and third parties.

Well this is interesting

Posted Mar 5, 2026 10:51 UTC (Thu) by lproven (guest, #110432) (5 responses)

> It is at least *not guaranteed* that running a model locally helps the environment.

I feel someone needs to point out that the most ecologically-friendly model is to run it in your head, and not use any LLMs at all ever.

This is not some way-out position. The pro-LLM lobby is loud and pervasive, but what I hear from readers of the Register and other techies is widespread profound LLM scepticism. I have no use for the things at all. There is nothing they can do for me that I can't do better myself.

So, "no LLMs" is my own personal policy. The *only* one I permit to run on any of my machines is the Firefox local translation feature, and I am increasingly considering replacing Firefox with Waterfox on macOS. I have already done so on Linux.

Well this is interesting

Posted Mar 5, 2026 15:01 UTC (Thu) by dskoll (subscriber, #1630)

I agree with this position. I do use AI tools locally, but they are not LLMs. Specifically, I use Whisper to transcribe audio to text.

Well this is interesting

Posted Mar 5, 2026 18:42 UTC (Thu) by roc (subscriber, #30627)

That's fine, but there are some amazing things going on.
https://normalcomputing.com/blog/building-an-open-source-...
is a recent example.

Well this is interesting

Posted Mar 5, 2026 21:49 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) (2 responses)

> I feel someone needs to point out that the most ecologically-friendly model is to run it in your head, and not use any LLMs at all ever.

A person in the US has a CO2 footprint of about 50 tons a year. Are you sure that the wetware LLM is going to be more efficient?

Well this is interesting

Posted Mar 6, 2026 8:59 UTC (Fri) by anton (subscriber, #25547) (1 response)

Pretty sure. That person has that CO2 footprint whether they use their wetware or let it atrophy by using artificial LLMs. But using the artificial LLM is going to have an additional CO2 footprint.

A potential environmental benefit of using your wetware may be that thinking about the problem may take longer, and you may have less time for activities that harm the environment more than programming does.

Well this is interesting

Posted Mar 6, 2026 15:07 UTC (Fri) by Wol (subscriber, #4433)

The other massive advantage is you may have an answer that is provably correct and reproducible, rather than having to repeatedly run an expensive process that says "this answer is probably correct". Yes, even if you're trying to solve the Travelling Salesman Problem on a daily basis (and I am).

Cheers,
Wol

Measurable

Posted Mar 4, 2026 14:03 UTC (Wed) by gmatht (subscriber, #58961) (2 responses)

It is nice to be able to personally check the amount of power being used. With various people claiming that using AGI is destroying the world, it is reassuring to know that your AGI is using far less power than a gamer (let alone aircon, automobiles, etc.).
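The "personally check" part really is a back-of-the-envelope exercise. A minimal sketch of the arithmetic, where the wattage, daily usage, and grid carbon intensity are all hypothetical placeholder values (on NVIDIA hardware the actual draw can be read with `nvidia-smi --query-gpu=power.draw --format=csv,noheader`):

```python
# Back-of-the-envelope estimate of local-inference energy and CO2.
# All three inputs below are assumptions for illustration only.
gpu_watts = 200.0           # measured GPU draw during inference (assumed)
hours_per_day = 2.0         # daily inference time (assumed)
grid_g_co2_per_kwh = 400.0  # grid carbon intensity, g CO2/kWh (assumed)

kwh_per_day = gpu_watts * hours_per_day / 1000.0
g_co2_per_day = kwh_per_day * grid_g_co2_per_kwh

print(f"{kwh_per_day:.2f} kWh/day, {g_co2_per_day:.0f} g CO2/day")
# prints: 0.40 kWh/day, 160 g CO2/day
```

With those assumed numbers, a couple of hours of local inference lands well below a typical gaming session, which is the comparison the comment is making.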

Measurable

Posted Mar 4, 2026 15:30 UTC (Wed) by excors (subscriber, #95769) (1 response)

Those measurements will be ignoring issues like the exponentially increasing training cost - a couple of years ago Anthropic's CEO said it then cost $100M to train a model but would soon reach $10B, and maybe $100B [1]. This year they reportedly expect to spend $7B on inference for paying users (not counting free users) and $12B on training [2], for an overall loss of about $11B. They're expecting costs will keep increasing, and they're just one of several companies developing large models. That's quite a lot of GPUs and electricity.

One report estimated data centers will increase from 4.4% of total US electricity usage in 2023, to 6.7-12% by 2028 [3], which is not an insignificant fraction and is about level with residential plus commercial air conditioning [4].

(I don't think energy usage is the strongest criticism of generative AI, but it's not a trivial issue.)

[1] https://www.businessinsider.com/anthropic-ceo-cost-10-bil...
[2] https://www.theinformation.com/articles/anthropic-hikes-2...
[3] https://www.energy.gov/articles/doe-releases-new-report-e...
[4] https://www.eia.gov/tools/faqs/faq.php?id=1174&t=1

Measurable

Posted Mar 6, 2026 2:10 UTC (Fri) by gmatht (subscriber, #58961)

Well, these don't affect the marginal cost of me running AGI on my GPU. I doubt that they are spending billions on electricity just so I can download a gratis model for Ollama.

Well this is interesting

Posted Mar 4, 2026 14:39 UTC (Wed) by jch (guest, #51929) (2 responses)

>> it's self hosted because I at least try to care about the environment

> Are you producing your own renewable electricity via rooftop solar or somesuch?

It's winter in the Northern hemisphere. He's heating his flat with his GPU, and compensating by turning down his heating by the same amount.

Well this is interesting

Posted Mar 4, 2026 15:38 UTC (Wed) by paulj (subscriber, #341)

Right... In winter you can offset heating costs; from spring to autumn you can use excess solar power.

Summer in Southern Hemisphere

Posted Mar 6, 2026 2:12 UTC (Fri) by gmatht (subscriber, #58961)

I have a 5 kW inverter. Using 1 kW for aircon and 1 kW for everything else still leaves plenty of power for my old 2070M (even assuming the LLM brought it close to its 45 W TDP).
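The power budget being claimed above checks out arithmetically; a quick sketch using only the figures stated in the comment:

```python
# Solar power-budget check, using the figures from the comment above.
inverter_w = 5000  # 5 kW inverter capacity
aircon_w = 1000    # air conditioning load
other_w = 1000     # everything else in the house
gpu_tdp_w = 45     # GPU at its stated TDP

headroom_w = inverter_w - aircon_w - other_w
print(headroom_w, headroom_w >= gpu_tdp_w)
# prints: 3000 True
```

That is roughly 65x the GPU's stated TDP in spare capacity, so the conclusion holds even if the GPU draws several times that figure under load.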


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds