
Portable LLMs with llamafile


Posted May 15, 2024 14:26 UTC (Wed) by flussence (guest, #85566)
In reply to: Portable LLMs with llamafile by taladar
Parent article: Portable LLMs with llamafile

Does *anything* work in ROCm? My impression of it over the past 5-10 years, from the volume of complaints on the internet about it, has been that it's the best advertising campaign Nvidia could've wished for.



Portable LLMs with llamafile

Posted May 18, 2024 9:42 UTC (Sat) by Felix (subscriber, #36445)

At least on my system (Fedora 40), I can run simple models like llama3-8b using llamafile+ROCm, and I see a pretty decent speedup when using the GPU. I'm using the ROCm packages provided by Fedora, so I think the situation is not that bad, even though there are still things that are not great (e.g. support for more GPUs, more AMD work on distro integration, ...).
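[For readers wanting to try this, a minimal sketch of such an invocation follows; the model filename is illustrative, and the exact speedup will depend on the GPU. llamafile's `--gpu amd` flag selects its ROCm/HIP backend, and `-ngl` controls how many model layers are offloaded to the GPU:]

```shell
# Make the downloaded llamafile executable, then run it with GPU offload.
# The filename below is illustrative; substitute whatever model you fetched.
chmod +x Meta-Llama-3-8B-Instruct.Q4_0.llamafile

# -ngl 999 asks llamafile to offload as many layers as possible to the GPU;
# --gpu amd selects the ROCm backend explicitly rather than auto-detecting.
./Meta-Llama-3-8B-Instruct.Q4_0.llamafile -ngl 999 --gpu amd \
    -p "Explain what ROCm is in one sentence."
```

[If the ROCm backend fails to initialize, llamafile falls back to CPU inference, so comparing runs with and without `-ngl` is an easy way to confirm the GPU is actually being used.]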


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds