
Gentoo bans AI-created contributions

Posted Apr 18, 2024 18:00 UTC (Thu) by snajpa (subscriber, #73467)
In reply to: Gentoo bans AI-created contributions by snajpa
Parent article: Gentoo bans AI-created contributions

(*and* I got three RTX 3090s sitting around here just so that I can play around with these so-called improved autocompletes :D they weren't even that expensive, 2nd hand from a miner)



Gentoo bans AI-created contributions

Posted Apr 19, 2024 18:30 UTC (Fri) by intelfx (subscriber, #130118)

> I got three RTX 3090 sitting around here just so that I can play around these so-called improved autocompletes

Is there anything of that sort (I mean, LLM-powered code assistance, Copilot-grade quality) that can actually be used locally? Any pointers?

(There is JetBrains' FLCC which runs on the CPU, but it is really not much better than lexical autocompletion. I'm talking about more powerful models.)

Gentoo bans AI-created contributions

Posted Apr 19, 2024 22:28 UTC (Fri) by snajpa (subscriber, #73467)

So far the closest to the Copilot experience was phind-codellama-34b-v2.Q4_K_M (GGUF format; llama.cpp and derivatives eat that, and it fits on a single 3090 - bigger models are too slow to respond IMO) plus the Twinny extension for VS Code. Next time I get to it (i.e. when my ISP has an outage and I have to fall back to flaky LTE) I'm going to give the Continue extension another shot. phind-codellama-34b-v2.Q4_K_M isn't as good as Copilot, but I haven't tried to modify the prompts the plugins feed it; judging from the behavior I get, I think there's a lot of room for optimization there.
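For reference, a minimal sketch of loading that same GGUF file directly through the llama-cpp-python bindings (the file path, prompt template, and generation parameters are illustrative assumptions; editor plugins like Twinny normally talk to a local llama.cpp-style server rather than in-process bindings):

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Illustrative path and parameters - adjust to your setup.
    llm = Llama(
        model_path="phind-codellama-34b-v2.Q4_K_M.gguf",
        n_gpu_layers=-1,  # offload all layers to the GPU; Q4_K_M fits one 3090
        n_ctx=4096,       # context window size
    )

    # Phind-style prompt template (assumed from the model card).
    prompt = (
        "### System Prompt\nYou are an intelligent programming assistant.\n\n"
        "### User Message\nWrite a C function that reverses a string in place.\n\n"
        "### Assistant\n"
    )

    out = llm(prompt, max_tokens=256, stop=["### User Message"])
    print(out["choices"][0]["text"])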

Outside of code completion, people really ought to try the miqu-1-70b "leak", which can fit onto two 24G cards, to see where the state of the art is (or was, not that long ago) relative to the resources it needs to run. Text generation with this thing is just about the most boring thing one can do; IMHO it doesn't deserve as much attention as it's getting. When we finally get open- (or at least published-) weights models with the current extended "up to 1M"-class context window sizes, combined with QLoRA, I think people are going to make some amazing things with them. For me, the 32k context size is currently the most limiting factor.
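To make the QLoRA point concrete: the idea is to keep the base model frozen in 4-bit quantization and train only small low-rank adapters on top, which is what makes fine-tuning big models feasible on a couple of consumer cards. A minimal sketch with the Hugging Face transformers/peft/bitsandbytes stack (the model name and hyperparameters are illustrative assumptions):

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # 4-bit NF4 quantization keeps the frozen base weights small enough
    # to fit a large model into consumer VRAM.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    # Illustrative model id - substitute whatever transformers-format
    # checkpoint you actually have locally.
    model = AutoModelForCausalLM.from_pretrained(
        "some-org/miqu-1-70b-transformers",
        quantization_config=bnb,
        device_map="auto",  # shard layers across both 24G cards
    )

    # Only these small low-rank adapter matrices get trained.
    lora = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()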

