As LLMs have grown in popularity, so has interest in running them locally. That isn't always easy, though: the raw compute that many language models demand isn't something typical consumer hardware can provide.