An LLM doesn't *need* vast power.
Posted Jun 28, 2025 1:51 UTC (Sat) by gmatht (guest, #58961)
In reply to: An LLM is... by dskoll
Parent article: Supporting kernel development with large language models
I have used TabbyML happily on an integrated GPU. Obviously there is a trade-off with model size, but it is quite possible to use LLMs locally without significantly impairing your battery life.
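For anyone curious what "locally" looks like in practice, a running Tabby server exposes an HTTP completion endpoint that editors and plugins talk to. Below is a minimal sketch in Python using the requests library, assuming a Tabby instance listening on its default localhost:8080; the exact endpoint path and response shape may vary between Tabby versions, so treat this as illustrative rather than definitive:

    import requests

    # Ask the local Tabby server to complete a Python snippet.
    # Assumes a server started locally, e.g. with `tabby serve`
    # (model and device flags depend on your setup and Tabby version).
    resp = requests.post(
        "http://localhost:8080/v1/completions",
        json={
            "language": "python",
            "segments": {"prefix": "def fib(n):\n    "},
        },
        timeout=30,
    )
    resp.raise_for_status()

    # Print the first suggested completion returned by the local model.
    print(resp.json()["choices"][0]["text"])

Since everything stays on localhost, no code leaves the machine, which is the whole appeal of running the model on an integrated GPU in the first place.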