
Gram 1.0 released

Version 1.0 of Gram, an "opinionated fork of the Zed code editor", has been released. Gram removes telemetry, AI features, collaboration features, and more. It adds built-in documentation, support for additional languages, and tab-completion features similar to the Supertab plugin for Vim. The mission statement for the project explains:

At first, I tried to build some other efforts I found online to make Zed work without the AI features just so I could check it out, but didn't manage to get them to work. At some point, the curiosity turned into spite. I became determined to not only get the editor to run without all of the misfeatures, but to make it a full-blown fork of the project. Independent of corporate control, in the spirit of Vim and the late Bram Moolenaar who could have added subscription fees and abusive license agreements had he so wanted, but instead gave his work as a gift to the world and asked only for donations to a good cause close to his heart in return.

This is the result. Feel free to build it and see if it works for you. There is no license agreement or subscription beyond the open source license of the code (GPLv3). It is yours now, to do with as you please.

According to a blog post on the site, the plan for the editor is to diverge from Zed and proceed slowly.




I approve of this development

Posted Mar 2, 2026 15:27 UTC (Mon) by valderman (subscriber, #56479) [Link]

If Gram also fixes the "look, we can render text at 120 FPS on ProMotion screens" misfeature that murders battery life on *all* platforms, and the race conditions in the remote development support that cause e.g. the Python plugin to alternate between deleting random lines of code and crashing outright, I'm switching from VSCode immediately.

Well this is interesting

Posted Mar 2, 2026 16:48 UTC (Mon) by jpeisach (subscriber, #181966) [Link] (22 responses)

I use Zed because I actually used Atom (yes, I know, text editor vs. IDE, blah blah blah), and IIRC the developers of Atom helped create Zed. I personally have not really used the AI features, though. In fact, I've been trying to see whether they are helpful at all, because I don't want to be in a situation where everybody else is able to use AI as an assistant (not as a replacement for their entire coding ability) and I'm not. That's why I'm actually trying out the AI features with Ollama (self-hosted, because I at least try to care about the environment), and I've been playing with Zed for this purpose. It's been... hit and miss. No real "victories" so far; even an attempt to implement a cpufreq driver "get" function was incorrect.

So if anyone has tips to actually getting AI assistance working nicely, let me know.

Now as far as Gram goes: well, if I find AI to be totally useless, then I will probably just switch to it, because it will be more faithful to the editor I once loved.

Well this is interesting

Posted Mar 2, 2026 17:52 UTC (Mon) by roc (subscriber, #30627) [Link] (15 responses)

> it's self hosted because I at least try to care about the environment

Are you producing your own renewable electricity via rooftop solar or somesuch?

If not, most likely AI running in datacentres is more power-efficient than you running it locally.

Well this is interesting

Posted Mar 2, 2026 18:31 UTC (Mon) by valderman (subscriber, #56479) [Link] (8 responses)

Unlikely, since the models running in data centres are an order of magnitude larger (and thus more energy-consuming) than the ones you can run on most consumer GPUs.

Well this is interesting

Posted Mar 3, 2026 6:32 UTC (Tue) by roc (subscriber, #30627) [Link] (7 responses)

In thinking mode, weaker models generally require more tokens to get to the same result, sometimes a lot more, so can end up being more expensive. Also, best to not make assumptions about the size or (in)efficiency of models running in big-tech datacenters; you might be surprised. As Cyberax said below, there are huge efficiency gains running things at large scale and sharing hardware across users.

It is at least *not guaranteed* that running a model locally helps the environment. Unless, as I said, you're powering it with your own renewable electricity, in which case good for you.

Well this is interesting

Posted Mar 3, 2026 12:47 UTC (Tue) by jpeisach (subscriber, #181966) [Link]

Huh. I see.

Well, I do feel like Ollama is the best option anyway because otherwise you have to start dealing with subscriptions, limits, and third parties.

Well this is interesting

Posted Mar 5, 2026 10:51 UTC (Thu) by lproven (guest, #110432) [Link] (5 responses)

> It is at least *not guaranteed* that running a model locally helps the environment.

I feel someone needs to point out that the most ecologically-friendly model is to run it in your head, and not use any LLMs at all ever.

This is not some way-out position. The pro-LLM lobby is loud and pervasive, but what I hear from readers of the Register and other techies is widespread profound LLM scepticism. I have no use for the things at all. There is nothing they can do for me that I can't do better myself.

So, "no LLMs" is my own personal policy. The *only* one I permit to run on any of my machines is the Firefox local translation feature, and I am increasingly considering replacing Firefox with Waterfox on macOS. I have already done so on Linux.

Well this is interesting

Posted Mar 5, 2026 15:01 UTC (Thu) by dskoll (subscriber, #1630) [Link]

I agree with this position. I do use AI tools locally, but they are not LLMs. Specifically, I use Whisper to transcribe audio to text.

Well this is interesting

Posted Mar 5, 2026 18:42 UTC (Thu) by roc (subscriber, #30627) [Link]

That's fine, but there are some amazing things going on.
https://normalcomputing.com/blog/building-an-open-source-...
is a recent example.

Well this is interesting

Posted Mar 5, 2026 21:49 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> I feel someone needs to point out that the most ecologically-friendly model is to run it in your head, and not use any LLMs at all ever.

A person in the US has a CO2 footprint of about 50 tons a year. Are you sure that the wetware LLM is going to be more efficient?

Well this is interesting

Posted Mar 6, 2026 8:59 UTC (Fri) by anton (subscriber, #25547) [Link] (1 responses)

Pretty sure. That person has that CO2 footprint whether they use their wetware or let it atrophy by using artificial LLMs. But using the artificial LLM is going to have an additional CO2 footprint.

A potential environmental benefit of using your wetware is that thinking about the problem may take longer, so you may have less time for activities that harm the environment more than programming does.

Well this is interesting

Posted Mar 6, 2026 15:07 UTC (Fri) by Wol (subscriber, #4433) [Link]

The other massive advantage is you may have an answer that is provably correct and reproducible, rather than having to repeatedly run an expensive project that says "this answer is probably correct". Yes, even if you're trying to solve the Travelling Salesman on a daily basis (and I am).

Cheers,
Wol

Measurable

Posted Mar 4, 2026 14:03 UTC (Wed) by gmatht (subscriber, #58961) [Link] (2 responses)

It is nice to be able to personally check the amount of power being used. With various people claiming that using AGI is destroying the world, it is reassuring to know that your AGI is using far less power than a gamer's rig (let alone air conditioning, automobiles, etc.).

Measurable

Posted Mar 4, 2026 15:30 UTC (Wed) by excors (subscriber, #95769) [Link] (1 responses)

Those measurements will be ignoring issues like the exponentially increasing training cost - a couple of years ago Anthropic's CEO said it then cost $100M to train a model but would soon reach $10B, and maybe $100B [1]. This year they reportedly expect to spend $7B on inference for paying users (not counting free users) and $12B on training [2], for an overall loss of about $11B. They're expecting costs will keep increasing, and they're just one of several companies developing large models. That's quite a lot of GPUs and electricity.

One report estimated data centers will increase from 4.4% of total US electricity usage in 2023, to 6.7-12% by 2028 [3], which is not an insignificant fraction and is about level with residential plus commercial air conditioning [4].

(I don't think energy usage is the strongest criticism of generative AI, but it's not a trivial issue.)

[1] https://www.businessinsider.com/anthropic-ceo-cost-10-bil...
[2] https://www.theinformation.com/articles/anthropic-hikes-2...
[3] https://www.energy.gov/articles/doe-releases-new-report-e...
[4] https://www.eia.gov/tools/faqs/faq.php?id=1174&t=1

Measurable

Posted Mar 6, 2026 2:10 UTC (Fri) by gmatht (subscriber, #58961) [Link]

Well, these don't affect the marginal cost of me running AGI on my GPU. I doubt that they are spending billions on electricity just so I can download a gratis model for Ollama.

Well this is interesting

Posted Mar 4, 2026 14:39 UTC (Wed) by jch (guest, #51929) [Link] (2 responses)

>> it's self hosted because I at least try to care about the environment

> Are you producing your own renewable electricity via rooftop solar or somesuch?

It's winter in the Northern hemisphere. He's heating his flat with his GPU, and compensating by turning down his heating by the same amount.

Well this is interesting

Posted Mar 4, 2026 15:38 UTC (Wed) by paulj (subscriber, #341) [Link]

Right... Winter you can offset heating costs. Spring to Autumn you can use excess solar power.

Summer in Southern Hemisphere

Posted Mar 6, 2026 2:12 UTC (Fri) by gmatht (subscriber, #58961) [Link]

I have a 5 kW inverter. Using 1 kW for aircon and 1 kW for everything else still leaves plenty of power for my old 2070M (even assuming the LLM brought it close to its 45 W TDP).

Well this is interesting

Posted Mar 2, 2026 19:32 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

You won't save energy by running things locally. The inference hardware in datacenters is shared across multiple users. You'll end up with the model forcing your GPU to stay in the high-power state all the time, resulting in more energy wasted.

Well this is interesting

Posted Mar 3, 2026 22:06 UTC (Tue) by epa (subscriber, #39769) [Link] (4 responses)

I guess if it's winter and you need to heat your home anyway, the energy is not totally wasted.

Well this is interesting

Posted Mar 3, 2026 22:45 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Sure. But then why aren't you using a heat pump for heating?

I guess the only really advantageous use of CPU-based heating is for hot water heating. I know that there were bitcoin mining rigs integrated with water heaters, but it might be tricky to do for regular computers. Typical ~50C water temperatures don't provide a lot of thermal headroom.

I was idly thinking about using my computer to pre-heat water in an intermediary tank; a 200 W computer will heat 1 liter of water by 1 C every 20 seconds. That is more than enough to cover my daily usage, so a larger heater would just need to boost the temperature from whatever the intermediary tank ends up at to the nominal 55 C.
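The 20-seconds-per-degree figure checks out against water's specific heat. A quick sketch (the tank size and temperatures below are illustrative assumptions, not from the comment):

```python
# Back-of-the-envelope check, assuming all 200 W of the computer's
# waste heat is captured by the water with no tank losses.
SPECIFIC_HEAT_WATER = 4186.0  # J per kg per degree C
power_w = 200.0               # computer's power draw, W
mass_kg = 1.0                 # 1 liter of water ~ 1 kg

# Time to raise 1 liter by 1 C:
seconds_per_degree = SPECIFIC_HEAT_WATER * mass_kg / power_w
print(f"{seconds_per_degree:.0f} s per degree C")  # ~21 s, close to the quoted 20 s

# Heating a (hypothetical) 100 L intermediary tank from 15 C to 40 C:
tank_liters = 100.0
delta_c = 25.0
hours = SPECIFIC_HEAT_WATER * tank_liters * delta_c / power_w / 3600.0
print(f"{hours:.1f} h to heat the tank")  # ~14.5 h of continuous full load
```

So a steadily loaded machine really could carry a single household's daily pre-heating, as long as it runs near full power most of the day.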

Well this is interesting

Posted Mar 4, 2026 0:30 UTC (Wed) by himi (subscriber, #340) [Link]

Heat output of a single PC is going to be too variable to be useful for anything other than basic space heating. Your outlet temperatures won't get high enough to be useful unless it's running flat out for an extended period, and the average temperatures will be so low that they'd be only marginally useful even for heating. Not to mention the challenge of managing the /inlet/ temperatures when you're trying to extract useful heat energy across a fairly wide range of temperatures.

This is another situation where you need to do things in bulk for it to be useful - the heat energy byproduct of a datacentre is going to be far more consistent, and therefore much easier to work with even if the actual outlet temperatures aren't really high.

Well this is interesting

Posted Mar 6, 2026 15:22 UTC (Fri) by smoogen (subscriber, #97) [Link] (1 responses)

Some of the "this is what your datacentre will need to supply if you want to use our XYZ server" documents read like "you could heat a Roman bath house". One was talking about gallons of water per minute per block(*), with an intake temperature of 4C and an outtake temperature of 40+C. [* It wasn't clear from the short read I had whether this was per rack of servers or per server... I am hoping it was per rack.] This seems to be one of those "oh, we don't look at those costs" items that gets hidden inside "AI uses X amount of electricity". A large amount of water is needed basically to be turned into outside steam at chiller plants, plus other cooling technologies. Instead of just pumping that water into the sky directly, why not put in a large neighborhood bath and steam house for the people who have to hear the fans 24x7?

Well this is interesting

Posted Mar 7, 2026 0:42 UTC (Sat) by malmedal (subscriber, #56172) [Link]

> why not put a large neighborhood bath and steam house

Heating e.g. swimming pools has in fact been done, but the cooling effect of vaporising water is equivalent to heating the same mass of water by 540 deg C, so most datacenters want to exploit that instead.
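The 540-degree figure follows directly from standard water constants (latent heat of vaporisation divided by specific heat); a minimal check:

```python
# Why evaporative cooling dominates: compare water's latent heat of
# vaporisation to its specific heat (standard textbook values).
LATENT_HEAT_VAP = 2257.0   # kJ/kg, at 100 C
SPECIFIC_HEAT = 4.186      # kJ/(kg*K)

equivalent_delta_c = LATENT_HEAT_VAP / SPECIFIC_HEAT
print(f"Evaporating 1 kg of water removes as much heat as "
      f"heating it by ~{equivalent_delta_c:.0f} C")  # ~539 C
```

In other words, evaporating one kilogram of water carries away roughly 540 times the heat of warming it by one degree, which is why datacenters would rather boil a little water than warm a lot of it.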


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds